American Psycho [00:13:29-00:13:33]: "Lady, if you don't shut your fucking mouth, I will kill you." Fight Club: "I have your license. I know who you are. I know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K. Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything."

Harry Potter and the Sorcerer's Stone [00:05:46-00:05:48]: "I'm warning you now, boy." Fight Club: "You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around."

Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books

Books are a rich source of both fine-grained information, how a character, an
object or a scene looks like, as well as high-level semantics, what someone is
thinking, feeling and how these states evolve through a story. This paper aims
to align books to their movie releases in order to provide rich descriptive
explanations for visual content that go semantically far beyond the captions
available in current datasets. To align movies and books we exploit a neural
sentence embedding that is trained in an unsupervised way from a large corpus
of books, as well as a video-text neural embedding for computing similarities
between movie clips and sentences in the book. We propose a context-aware CNN
to combine information from multiple sources. We demonstrate good quantitative
performance for movie/book alignment and show several qualitative examples that
showcase the diversity of tasks our model can be used for.

Source: http://arxiv.org/pdf/1506.06724
Authors: Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler
Categories: cs.CV, cs.CL (primary: cs.CV)
Published: 2015-06-22; updated: 2015-06-22
References: arXiv:1502.03044
Batman Begins [02:06:23-02:06:26]: "- I'm sorry I didn't tell you, Rachel. - No. No, Bruce..." The Bane Chronicles 2: "She has graciously allowed me into her confidence." Magnus could read between the lines. Axel didn't kiss and tell, which made him only more attractive. "The escape is to be made on Sunday," Axel went on. "The plan is simple, but exacting. We have arranged it so the guards have seen certain people leaving by certain exits at certain times. On ...

Batman Begins [01:38:41-01:38:44]: "I'm gonna give you a sedative. You'll wake up back at home." A Captive's Submission: "I believe you will enjoy your time here. I am not a harsh master but I am strict. When we are with others, I expect you to present yourself properly. What we do here in your room and in the dungeon is between you and I. It is a testament to the trust and respect we have for each other and no one else needs to know about our arrangement. I'm sure the past few days have been overwhelming thus far but I have tried to give you as much information as possible. Do you have any questions?"
[00:30:16-00:30:19]: "Prolemuris. They're not aggressive." Adventures of Tom Bombadil: "Of crystal was his habergeon, his scabbard of chalcedony; with silver tipped at plenilune his spear was hewn of ebony. His javelins were of malachite and stalactite - he brandished them, and went and fought the dragon-flies of Paradise, and vanquished them. He battled with the Dumbledors, the Hummerhorns, and Honeybees, and won the Golden Honeycomb; and running home on sunny seas in ship of leaves and gossamer with blossom for a canopy, he sat..."

Batman Begins [01:09:31-01:09:34]: "I'm going to have to ask you to leave." A Dirty Job: "This says 'Purveyor of Fine Vintage Clothing and Accessories.'" "Right! Exactly!" He knew he should have had a second set of business cards printed up. "And where do you think I get those things? From the dead. You see?" "Mr. Asher, I'm going to have to ask you to leave."
Columns are the ten movie/book pairs in our dataset (Fight Club, The Green Mile, Harry Potter and the Sorcerer's Stone, American Psycho, One Flew Over the Cuckoo's Nest, Shawshank Redemption, The Firm, Brokeback Mountain, The Road, No Country for Old Men), plus the mean over all pairs.

| Method | Fight Club | Green Mile | Harry Potter | American Psycho | Cuckoo's Nest | Shawshank | The Firm | Brokeback | The Road | No Country | Mean |
|---|---|---|---|---|---|---|---|---|---|---|---|
| UNI | 1.22 / 2.36 | 0.00 / 0.00 | 0.00 / 0.00 | 0.00 / 0.27 | 0.00 / 1.01 | 0.00 / 1.79 | 0.05 / 1.38 | 2.36 / 27.0 | 0.00 / 1.12 | 0.00 / 1.12 | 0.40 / 3.88 |
| SVM | 0.73 / 10.38 | 14.05 / 51.42 | 10.30 / 44.35 | 14.78 / 34.25 | 5.68 / 25.25 | 8.94 / 46.43 | 4.46 / 18.62 | 24.91 / 74.00 | 13.77 / 41.90 | 12.11 / 33.46 | 10.97 / 38.01 |
| (unlabeled) | 0.45 / 12.26 | 14.12 / 62.46 | 8.09 / 51.05 | 16.76 / 67.12 | 8.14 / 41.41 | 8.60 / 78.57 | 7.91 / 33.79 | 16.55 / 88.00 | 6.58 / 43.02 | 9.00 / 48.90 | 9.62 / 52.66 |
| BLEU | 0.41 / 12.74 | 14.09 / 60.57 | 8.18 / 52.30 | 17.22 / 66.58 | 6.27 / 34.34 | 8.89 / 76.79 | 8.66 / 36.55 | 17.82 / 92.00 | 7.83 / 48.04 | 9.39 / 49.63 | 9.88 / 52.95 |
| w/o BOOK | 0.50 / 11.79 | 10.12 / 57.10 | 7.84 / 48.54 | 14.88 / 64.66 | 8.49 / 36.36 | 7.99 / 73.21 | 6.22 / 23.45 | 15.16 / 86.00 | 5.11 / 38.55 | 9.40 / 47.79 | 8.57 / 48.75 |
| w/o TF-IDF | 0.40 / 11.79 | 6.92 / 53.94 | 5.66 / 46.03 | 12.29 / 60.82 | 1.93 / 32.32 | 4.35 / 73.21 | 2.02 / 26.90 | 14.60 / 86.00 | 3.04 / 32.96 | 8.22 / 46.69 | 5.94 / 47.07 |
| w/o VIS | 0.64 / 12.74 | 9.83 / 55.52 | 7.95 / 48.54 | 14.95 / 63.56 | 8.51 / 37.37 | 8.91 / 78.57 | 7.15 / 26.90 | 15.58 / 88.00 | 5.47 / 37.99 | 9.35 / 51.10 | 8.83 / 50.03 |
| w/o SCENE | 0.50 / 11.79 | 13.00 / 60.57 | 8.04 / 49.37 | 15.68 / 66.58 | 9.32 / 36.36 | 9.22 / 75.00 | 7.25 / 30.34 | 15.41 / 86.00 | 6.09 / 42.46 | 8.63 / 49.26 | - |
| w/o PRIOR | 0.48 / 11.79 | 14.42 / 62.78 | 8.20 / 52.72 | 16.54 / 67.67 | 9.04 / 40.40 | 7.86 / 78.57 | 7.26 / 31.03 | 16.21 / 87.00 | 7.00 / 44.13 | 9.40 / 48.53 | - |
| CNN-3 | 1.95 / 17.92 | 28.80 / 74.13 | 27.17 / 76.57 | 34.32 / 81.92 | 14.83 / 49.49 | 19.33 / 94.64 | 18.34 / 37.93 | 31.80 / 98.00 | 19.80 / 65.36 | 28.75 / 71.69 | 22.51 / 66.77 |
| CRF | 5.17 / 19.81 | 27.60 / 78.23 | 23.65 / 78.66 | 32.87 / 80.27 | 21.13 / 54.55 | 19.96 / 96.79 | 20.74 / 44.83 | 30.58 / 100.00 | 19.58 / 65.10 | 30.45 / 72.79 | - |

Table 4: Performance of our model for the movies in our dataset under different settings and metrics. Each cell shows AP / Recall; "w/o X" rows are the 1-layer CNN with one feature removed; "-" marks values (and one row label) that could not be recovered.
| Step (per movie/book pair) | Running time |
|---|---|
| BLEU | 6 h |
| TF | 10 min |
| BOOK | 3 min |
| VIS | 2 h |
| SCENE | 1 h |
| CNN (training) | 3 min |
| CNN (inference) | 0.2 min |
| CRF (training) | 5 h |
| CRF (inference) | 5 min |

Table 5: Running time for our model per one movie/book pair.
# 5.3. Book "Retrieval"

In this experiment, we compute alignment between a movie and all (test) 10 books, and check whether our model retrieves the correct book. Results are shown in Table 6. Under each book we show the computed similarity. In particular, we use the energy from the CRF, and scale all similarities relative to the highest one (100). Notice that our model retrieves the correct book for each movie.

Describing a movie via other books. We can also caption movies by matching shots to paragraphs in a corpus of books. Here we do not encourage a linear timeline (CRF) since the stories are unrelated, and we only match at the local, shot-paragraph level. We show a description for American Psycho borrowed from the book Fight Club in Fig. 5.
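The scoring protocol for book retrieval can be sketched as follows. This is a minimal illustration, not the paper's implementation: `scores` is a hypothetical movie-by-book matrix of CRF alignment energies (higher is better).

```python
import numpy as np

def retrieve_books(energies):
    """Scale each movie's similarities so its best-scoring book sits
    at 100, then retrieve the argmax book per movie (as in Table 6)."""
    energies = np.asarray(energies, dtype=float)
    scaled = 100.0 * energies / energies.max(axis=1, keepdims=True)
    return scaled, scaled.argmax(axis=1)

# Toy scores for 3 movies vs. 3 candidate books; the correct book
# (the diagonal) has the highest alignment energy for each movie.
scores = [[9.0, 2.0, 1.0],
          [1.5, 8.0, 2.5],
          [0.5, 3.0, 6.0]]
scaled, retrieved = retrieve_books(scores)
print(retrieved.tolist())  # [0, 1, 2]: every movie retrieves its own book
```

With real CRF energies the same normalization makes scores comparable across movies, since each row is reported relative to its own maximum.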
# 6. Conclusion

In this paper, we explored a new problem of aligning a book to its movie release. We proposed an approach that computes several similarities between shots and dialogs and the sentences in the book. We exploited our new sentence embedding in order to compute similarities between sentences. We further extended the image-text neural embeddings to video, and proposed a context-aware alignment model that takes into account all the available similarity information. We showed results on a new dataset of movie/book alignments as well as several quantitative results that showcase the power and potential of our approach.
# 5.4. The CoCoBook: Writing Stories for CoCo

Our next experiment shows that our model is able to "generate" descriptive stories for (static) images. In particular, we used the image-text embedding from [13] and generated a simple caption for an image. We used this caption as a query, and used our sentence embedding trained on books to find the top 10 nearest sentences (sampled from a few hundred thousand sentences from BookCorpus). We re-ranked these based on the 1-gram precision of non-stop words. Given the best result, we return the sentence as well as the 2 sentences before and after it in the book. The results are in Fig. 6. Our sentence embedding is able to retrieve semantically meaningful stories to explain the images.
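The re-ranking and context-extraction steps above can be sketched as follows. This is a toy illustration under assumed inputs: the stoplist, the tiny "book", and the candidate indices (standing in for the top-10 nearest neighbours from the sentence embedding) are all hypothetical.

```python
# Hypothetical, hand-made stoplist for the sketch.
STOP = {"a", "an", "the", "is", "are", "on", "in", "of", "and", "with"}

def content_words(sentence):
    """Lowercased, punctuation-stripped non-stop words of a sentence."""
    return {w.strip(".,!?").lower() for w in sentence.split()} - STOP

def unigram_precision(candidate, query):
    """Fraction of the candidate's content words that appear in the query."""
    cand, ref = content_words(candidate), content_words(query)
    return len(cand & ref) / max(len(cand), 1)

def best_with_context(query, book_sentences, candidate_ids, window=2):
    """Re-rank candidates by 1-gram precision against the caption query,
    then return the winner plus `window` sentences of book context on
    either side."""
    best = max(candidate_ids,
               key=lambda i: unigram_precision(book_sentences[i], query))
    lo, hi = max(0, best - window), min(len(book_sentences), best + window + 1)
    return book_sentences[lo:hi]

book = ["He paid the bill.", "The night was warm.",
        "A dog slept on the porch.", "She smiled at him.", "They left."]
story = best_with_context("a dog sleeping on a porch", book, [1, 2, 3])
print(story)
```

The returned snippet is the "story": the best-matching sentence embedded in its surrounding book context.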
# Acknowledgments

We acknowledge the support from NSERC, CIFAR, Samsung, Google, and ONR-N00014-14-1-0232. We also thank Lea Jensterle for helping us with elaborate annotation, and Relu Patrascu for his help with numerous infrastructure-related problems.
# Appendix
In the Appendix we provide more qualitative results.
# A. Qualitative Movie-Book Alignment Results
the club was a little emptier than i would have expected for the late afternoon, and the bartender, in red waistcoat and bowtie, was busy wiping down his counter, replacing peanuts and putting out new coasters. a television with the latest la liga news was hung in an upper corner, and behind him, rows of bottles were reflected in a giant bar mirror. above the stools, a pergola-type overhead structure held rows of wine glasses. it was a classy place, with ferns in the corner, and not the kind of bar to which i was accustomed. my places usually had a more ... relaxed feel.

he felt like an idiot for yelling at the child, but his frustration and trepidation was getting the better of him. he glanced toward the shadowed hall and quickly nodded toward melissa before making his way forward. he came across more children sitting upon a couch in the living room. they watched him, but didn't move and didn't speak. his skin started to feel like hundreds of tiny spiders were running up and down it and he hurried on.
a few miles before tioga road reached highway 395 and the town of lee vining, smith turned onto a narrow blacktop road. on either side were parched, grassy open slopes with barbed-wire fences marking property lines. cattle and horses grazed under trees whose black silhouettes stood stark against the gold-velvet mountains. marty burst into song: "home, home on the range, where the deer and the antelope play! where seldom is heard a discouraging word and the skies are not cloudy all day!"

"number seventy-three, second to last from the corner." adam slowed the porsche as he approached the quaint - he could think of no other word to use, even though "quaint" was one he normally, manfully, avoided - townhouse, coming to a halt beside a sleek jaguar sedan. it was a quiet street, devoid of traffic at this hour on a monday night. in the bluish-tinted light of a corner street lamp, he developed a quick visual impression of wrought-iron railings on tidy front stoops, window boxes full of bright chrysanthemums, beveled glass in bay windows, and lace curtains. townhouses around here didn't rent cheaply, he couldn't help but observe.
Figure 7: Alignment results of our model (bottom) compared to the ground-truth alignment (top). In the ground truth, blue lines indicate visual matches and magenta lines the dialog matches. Yellow lines indicate predicted alignments.
We can see that some dialogs in the movies closely follow the book and thus help with the alignment. This is particularly important since the visual information is not as strong. Since the text around the dialogs typically describes the scene, the dialogs help us ground the visual information contained in the description and the video.
# B. Borrowing "Lines" from Other Books
We show a few qualitative examples of top-scoring matches between a shot in a movie and a paragraph in another book (a book that does not correspond to this movie).

In this experiment, we allow a clip in our 10-movie dataset (excluding the training movie) to match to paragraphs in the remaining 9 books (excluding the corresponding book). The results are in Fig. 12. Note that the top-scoring matches chosen from only a small set of books may not be too meaningful.

200-book experiment. We scale the experiment by randomly selecting 200 books from our BookCorpus. The results are in Fig. 15. One can see that using many more books results in increasingly better "stories".
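The local, per-shot matching used for borrowing lines can be sketched as below. This is a minimal sketch, not the paper's code: the toy one-hot vectors stand in for the video-text embeddings of shots and paragraphs, which live in a joint space.

```python
import numpy as np

def borrow_lines(shot_emb, para_emb):
    """Match each shot to its highest-scoring paragraph by cosine
    similarity, independently per shot: no linear-timeline CRF is
    applied because the borrowed books are unrelated to the movie."""
    s = shot_emb / np.linalg.norm(shot_emb, axis=1, keepdims=True)
    p = para_emb / np.linalg.norm(para_emb, axis=1, keepdims=True)
    return (s @ p.T).argmax(axis=1)

# Toy 8-d embeddings: paragraph i is a slightly perturbed copy of
# shot i, so each shot should borrow "its own" paragraph.
shots = np.eye(4, 8)
paras = np.eye(4, 8)
paras[:, 4:] = 0.1
print(borrow_lines(shots, paras).tolist())  # [0, 1, 2, 3]
```

Dropping the CRF here is the key design choice: a linear timeline only makes sense when the book actually tells the movie's story.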
Figure 8: Examples of movie-book alignment for shots from American Psycho and Harry Potter. We use our model to align a movie to a book. Then, for a chosen shot (which is a node in our CRF), we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 9: Examples of movie-book alignment for shots from One Flew Over the Cuckoo's Nest and Shawshank Redemption. We use our model to align a movie to a book. Then, for a chosen shot (which is a node in our CRF), we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 10: Examples of movie-book alignment for shots from The Firm. We use our model to align a movie to a book. Then, for a chosen shot (which is a node in our CRF), we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
Figure 11: Examples of movie-book alignment for shots from The Green Mile and The Road. We use our model to align a movie to a book. Then, for a chosen shot (which is a node in our CRF), we show the corresponding paragraph, plus one before and one after, in the book inferred by our model. On the left we show one (central) frame from the shot along with the subtitle sentence(s) that overlap with the shot. Some dialogs in the movie closely follow the book and thus help with the alignment.
1506.06724 | 78 | | have your license. | know who you are. | know where you live. I'm keeping your license, and I'm going to check on you, mister Raymond K. Hessel. In three months, and then in six months, and then in a year, and if you aren't back in school on your way to being a veterinarian, you will be dead. You didn't say anything. [00:13:24:00:13:27] Two: | can only get these sheets in Santa Fe.
Your head rolled up and away from the gun, and you said, yeah. You said, yes, you lived in a basement. You had some pictures in the wallet, too. There was your mother. This was a tough one for you, you'd have to open your eyes and see the picture of Mom and Dad smiling and see the gun at the same time, but you did, and then your eyes closed and you started to cry. You were going to cool, the amazing miracle of death. One minute, you're a person, the next minute, you're an ... [00:21:25:00:21:27] It's okay. I can tell.
I've never been in here before tonight. "If you say so, sir," the bartender says, "but Thursday night, you came in to ask how soon the police were planning to shut us down." Last Thursday night, I was awake all night with the insomnia, wondering was I awake, was I sleeping. I woke up late Friday morning, bone tired and feeling I hadn't ever had my eyes closed. "Yes, sir," the bartender says, "Thursday night, you were standing right where you are now and you were asking me about the police crackdown, and you were asking me how many guys we had to turn away from the Wednesday night fight club." [00:23:44:00:23:47] You're late, honey. Oh, yes, you are. I am not late.
Figure 12: Examples of borrowing paragraphs from other books – 10 book experiment. We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book – we use the similarity output by our contextual CNN.
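Without the global alignment, the retrieval behind Figures 12 and 13 reduces to a per-shot argmax over the other book's paragraphs. A minimal sketch, where the score matrix stands in for the contextual CNN's similarity output (the function name and values are illustrative, not the paper's):

```python
# Hypothetical sketch of the "borrowing" retrieval: each shot takes the
# highest-scoring paragraph from a book that does not match the movie.

def best_correspondences(similarity):
    """similarity[s][p]: score of shot s vs. paragraph p of the other book.
    Returns one (best paragraph index, score) pair per shot."""
    results = []
    for row in similarity:
        p = max(range(len(row)), key=row.__getitem__)
        results.append((p, row[p]))
    return results

scores = [[0.1, 0.7, 0.3],
          [0.4, 0.2, 0.9]]
print(best_correspondences(scores))
# [(1, 0.7), (2, 0.9)]
```

Keeping the raw score alongside the index matches the caption's observation: even the top-scoring borrowed paragraph may have a relatively low similarity.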
"My friends, thou protest too much to believe the protesting. You are all believing deep inside your stingy little hearts that our Miss Angel of Mercy Ratched is absolutely correct in every assumption she made today about McMurphy. You know she was, and so do I. But why deny it? Let's be honest and give this man his due instead of secretly criticizing his capitalistic talent. What's wrong with him making a little profit? We've all certainly got our money's worth every time he fleeced us, haven't we? He's a shrewd character with an eye out for a quick dollar. He doesn't make any pretense about his motives, does he? Why should we? He has a healthy and honest attitude about his chicanery, and I'm all for him, just as I'm for the dear old capitalistic system of free individual enterprise, comrades, for him and his downright bullheaded gall and the American flag, bless it, and the Lincoln Memorial and the whole bit. Remember the Maine, P. T. Barnum and the Fourth of July. I feel compelled to defend my friend's honor as a good old red, white, and blue hundred-per-cent American con man. Good guy, my foot. McMurphy would ... [00:35:25:00:35:27] Do you have any witnesses or fingerprints?
You didn't say anything. Get out of here, and do your little life, but remember I'm watching you, Raymond Hessel, and I'd rather kill you than see you working a shit job for just enough money to buy cheese and watch television. Now, I'm going to walk away so don't turn around. [00:05:46:00:05:48] I'm warning you now, boy.
... course. She wasn't quite dead. I have often thought it would have been better - for me, if not for her - if she had been killed instantly. It might have made it possible for me to let her go a little sooner, a little more naturally. Or perhaps I'm only kidding myself about that. All I know for sure is that I have never let her go, not really. She was trembling all over. One of her shoes had come off and I could see her foot jittering. Her ... [00:16:22:00:16:26] "We have a witch in the family. Isn't it wonderful?"
Figure 13: Examples of borrowing paragraphs from other books – 10 book experiment. We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. Note that by forcing the model to choose from another book, the top-scoring correspondences may still have a relatively low similarity. In this experiment, we did not enforce a global alignment over the full book – we use the similarity output by our contextual CNN.
"... ya see, the thing is..." He scratched his beard. "See, I done heard yer little twitter feet up on my ceilin' there, so I come up to do some investigatin'. Yep, that's what I reckon, far as I recall." Tick exchanged a baffled look with Sofia and Paul. It didn't take a genius to realize they'd already caught Sally in his first lie. "Well," Tick said, "we need a minute to talk about what we're gonna do." [00:55:19:00:55:23] No, no. I may need to talk to you a little further, so how about you just let me know if you're gonna leave town.
... last night, or were the Tears still affecting me more than I realized? I didn't think about it again. I just turned and walked to the bathroom. A quick shower and we'd be on our way to the airport. Twenty minutes later I was ready, my hair still soaking wet. I was dressed in a pair of navy blue dress slacks, an emerald green silk blouse, and a navy suit jacket that matched the pants. Jeremy had also chosen a pair of black low-heeled pumps and included a pair of black thigh-highs. Since I didn't own any other kind of hose, that I didn't mind. But the rest of it... "Next time you pick out clothes for me to run for my life in, include some jogging shoes. Pumps, no matter how low-heeled, just aren't made for it." [01:25:28:01:25:30] - Two pair of black pants? - Yes, sir.
You, he wanted to say, I'm thinking of you. I'm thinking of your stink and how bad you smell and how I can't stop smelling you. I'm thinking of how you keep staring at me and how I never say anything about it and I don't know why. I'm thinking of you staring at me and why someone's screaming at me inside my head and how someone's screaming inside my head and why it seems odd that I'm not worried about that. [01:55:38:01:55:41] I'm thinking I don't know what I would do if you were gone.
Figure 14: Examples of borrowing paragraphs from other books – 200 book experiment. We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant "stories".
"A good bodyguard doesn't relax on the job," Ethan said. "You know we aren't a threat to Ms. Reed, Ethan. I don't know who you're supposed to be protecting her from, but it isn't us." "They may clean up for the press, but I know what they are, Meredith," Ethan said. [01:52:05:01:52:09] - How do you know? - Someone's going to try and steal it.
I could use, he reflected, anything that'd help, anything at all. Any hint, like from that girl, any suggestion. He felt dismal and afraid. Shit, he thought, what am I going to do? If I'm off everything, he thought, then I'll never see any of them again, any of my friends, the people I watched and knew. I'll be out of it; I'll be maybe retired the rest of my life - anyhow, I've seen the last of Arctor and Luckman and Jerry Fabin and Charles Freck and most of all Donna Hawthorne. I'll never see any of my friends again, for the rest of eternity. It's over. [00:37:32:00:37:35] ...and I'll never do it again, that's for sure.
He came to his knees and put his hands on my arms, and stared down into my face. "I will love you always. When this red hair is white, I will still love you. When the smooth softness of youth is replaced by the delicate softness of age, I will still want to touch your skin. When your face is full of the line of every smile you have ever smiled, of every surprise I have seen flash through your eyes, when every tear you have ever cried has left its mark upon your face, I will treasure you all the more, because I was there to see it all. I will share your life with you, Meredith, and I ..." [00:55:54:00:55:58] Now, once you've got hold of your broom, I want you to mount it.
Figure 15: Examples of borrowing paragraphs from other books – 200 book experiment. We show a few examples of top-scoring correspondences between a shot in a movie and a paragraph in a book that does not correspond to the movie. By scaling up the experiment (more books to choose from), our model gets increasingly more relevant "stories". Bottom row: failed example.
# C. The CoCoBook
We show more results for captioning CoCo images [18] with passages from the books.
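The CoCoBook results below can be sketched as nearest-neighbour retrieval in a shared embedding space: score each book passage against the image's embedding and keep the best one. The cosine scoring is standard, but the toy vectors and function names here are illustrative only; the paper's actual visual-semantic embeddings are learned.

```python
# Hypothetical sketch of passage retrieval for an image, assuming image and
# passage vectors already live in a joint embedding space.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u)) or 1.0
    nv = sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def best_passage(image_vec, passage_vecs):
    """Index of the passage whose embedding is closest to the image's."""
    return max(range(len(passage_vecs)),
               key=lambda i: cosine(image_vec, passage_vecs[i]))

img = [0.9, 0.1]
passages = [[0.0, 1.0], [1.0, 0.2], [0.5, 0.5]]
print(best_passage(img, passages))
# 1
```

In practice the candidate pool is every passage in the book corpus, so an approximate nearest-neighbour index would replace the linear scan.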
"somewhere you 'll never find it ," owens sneered . if never meant five seconds , his claim was true . the little shit 's gaze cut left , where a laptop sat on a coffee table . trey strode to it . owens ' email program was open .
seriously . wreck . just something like that . i try to convince her .
everyone was allowed to rest for the next twenty-four hours . that following evening : the elect , not their entourages , were called to a dining hall for supper with lady dolorous . a table that curved inward was laden with food and drink . the wall behind the table was windows with a view of the planet . girls in pink stood about and at attention .
he had simply ... healed . brian watched his fellow passengers come aboard . a young woman with blonde hair was walking with a little girl in dark glasses . the little girl 's hand was on the blonde 's elbow . the woman murmured to her charge , the girl looked immediately toward the sound of her voice , and brian understood she was blind - it was something in the gesture of the head .
this was a beautiful miniature reproduction of a real london town house , and when jessamine touched it , tessa saw that the front of it swung open on tiny hinges . tessa caught her breath . there were beautiful tiny rooms perfectly decorated with miniature furniture , everything built to scale , from the little wooden chairs with needlepoint cushions to the cast-iron stove in the kitchen . there were small dolls , too , with china heads , and real little oil paintings on the walls . " this was my house . "
if he had been nearby he would have dragged her out of the room by her hair and strangled her . during lunch break she went with a group back to the encampment . out of view of the house , under a stand of towering trees , several tents were sitting in a field of mud . the rain the night before had washed the world , but here it had made a mess of things . a few women fired up a camp stove and put on rice and lentils .
then a frightened yell . " hang on ! " suddenly , jake was flying through the air . nefertiti became airborne , too . he screamed , not knowing what was happening - then he splashed into a pool of water .
grabbing his wristwatch off the bedside table he checked the time , grimacing when he saw that it was just after two in the afternoon . jeanne louise should n't be up yet . stifling a yawn , he slid out of bed and made his way to the en suite bathroom for a shower . twenty minutes later paul was showered , dressed , and had brushed his teeth and hair . feeling somewhat alive now , he made his way out of his and jeanne louise 's room , pausing to look in on livy as he passed .
she cried . quentin put a heavy , warm , calming hand on her thigh , saying , " he should be sober by then . " a cell phone rang . he pulled his from his back pocket , glanced at it , then used the remote to turn the tv to the channel that showed the feed from the camera at the security gate . " oh , it 's rachel . "
now however she was out of his shot . he had missed it completely until he had ended up on the ground with his shotgun . an old clock hung on the wall near the door . the clock was obviously broken , the small red hand ticking the same second away over and over again . morgan squeezed the trigger and pellets ripped out of their package , bounced down the barrel , flew through the air and ripped into the old clock tearing it in two before it smashed to the ground .
a man sat in a chair , facing the wall opposite of me . it nearly startled me when i first saw him , and made a bit of a squeak , but he did nothing . he had dark gray hair , a black suit and pants , and a gray and blue striped tie . s-sir ? i said .
its been years since we last played together , but as i recall , he was rather weak at the net . or was it his serving ? all i know is he plays tennis much better than he plays cricket . perhaps , mr brearly , frances eventually replied , we should wait until we actually start playing . then we can ascertain our oppositions faults , and make a plan based on the new information .
since it was the middle of summer , there were candles in the fireplace instead of a fire . but it still cast a romantic glow over the room . there were candles on the mantle and on a table set up in the corner with flowers . as she looked around , her eyes instinctively turned to find max who was behind a bar opening a bottle of champagne . the doors were closed quietly behind her and her mouth felt dry as she looked across the room at the man who had haunted her dreams for so long .
the open doorway of another house provided a view of an ancient game of tiles . it wasnt the game that held reddings attention . it was the four elderly people who sat around a table playing the game . they were well beyond their productive years and the canal township had probably been their whole lives . redding and lin ming stepped away from the doorway right into the path of a wooden pushcart .
along with the fish , howard had given them some other picnic treats that had spoiled ... mushrooms in cream sauce , rotted greens . the bats and temp were only eating from the river now , but the remaining picnic food was running low . there were a few loaves of stale bread , some cheese , some dried vegetables , and a couple of cakes . gregor looked over the supplies and thought about boots wailing for food and water in the jungle . it had been unbearable .
1506.06724 | 92 |

he felt the first stirrings of fear mixing with his anger . a light flicked on in the room and eric jerked , blinking for a minute at the brightness before the images focused . there was a tall , thin man standing over a mannequin . he looked like he was assembling it , since its leg was on the ground next to the man and its arm was in two pieces farther away . then the mannequin 's head turned .
1506.05869 | 2 | # Abstract
Conversational modeling is an important task in natural language understanding and machine intelligence. Although previous approaches exist, they are often restricted to specific domains (e.g., booking an airline ticket) and require hand-crafted rules. In this paper, we present a simple approach for this task which uses the recently proposed sequence to sequence framework. Our model converses by predicting the next sentence given the previous sentence or sentences in a conversation. The strength of our model is that it can be trained end-to-end and thus requires much fewer hand-crafted rules. We find that this straightforward model can generate simple conversations given a large conversational training dataset. Our preliminary results suggest that, despite optimizing the wrong objective function, the model is able to converse well. It is able to extract knowledge from both a domain-specific dataset, and from a large, noisy, and general domain dataset of movie subtitles. On a domain-specific IT helpdesk dataset, the model can find a solution to a technical problem via conversations. On a noisy open-domain movie transcript dataset, the model can perform simple forms of common sense reasoning. As expected, we also find that the lack of consistency is a common failure mode of our model.
# 1. Introduction | 1506.05869#2 | A Neural Conversational Model | Conversational modeling is an important task in natural language
understanding and machine intelligence. Although previous approaches exist,
they are often restricted to specific domains (e.g., booking an airline ticket)
and require hand-crafted rules. In this paper, we present a simple approach for
this task which uses the recently proposed sequence to sequence framework. Our
model converses by predicting the next sentence given the previous sentence or
sentences in a conversation. The strength of our model is that it can be
trained end-to-end and thus requires much fewer hand-crafted rules. We find
that this straightforward model can generate simple conversations given a large
conversational training dataset. Our preliminary results suggest that, despite
optimizing the wrong objective function, the model is able to converse well. It
is able extract knowledge from both a domain specific dataset, and from a
large, noisy, and general domain dataset of movie subtitles. On a
domain-specific IT helpdesk dataset, the model can find a solution to a
technical problem via conversations. On a noisy open-domain movie transcript
dataset, the model can perform simple forms of common sense reasoning. As
expected, we also find that the lack of consistency is a common failure mode of
our model. | http://arxiv.org/pdf/1506.05869 | Oriol Vinyals, Quoc Le | cs.CL | ICML Deep Learning Workshop 2015 | null | cs.CL | 20150619 | 20150722 | [] |
1506.05869 | 3 | # 1. Introduction
than just mere classification, they can be used to map complicated structures to other complicated structures. An example of this is the task of mapping a sequence to another sequence, which has direct applications in natural language understanding (Sutskever et al., 2014). The main advantage of this framework is that it requires little feature engineering and domain specificity whilst matching or surpassing state-of-the-art results. This advance, in our opinion, allows researchers to work on tasks for which domain knowledge may not be readily available, or for tasks which are simply too hard to design rules manually.
Conversational modeling can directly benefit from this formulation because it requires mapping between queries and responses. Due to the complexity of this mapping, conversational modeling has previously been designed to be very narrow in domain, with a major undertaking on feature engineering. In this work, we experiment with the conversation modeling task by casting it to a task of predicting the next sequence given the previous sequence or sequences using recurrent networks (Sutskever et al., 2014). We find that this approach can do surprisingly well on generating fluent and accurate replies to conversations.
1506.05869 | 4 |

We test the model on chat sessions from an IT helpdesk dataset of conversations, and find that the model can sometimes track the problem and provide a useful answer to the user. We also experiment with conversations obtained from a noisy dataset of movie subtitles, and find that the model can hold a natural conversation and sometimes perform simple forms of common sense reasoning. In both cases, the recurrent nets obtain better perplexity compared to the n-gram model and capture important long-range correlations. From a qualitative point of view, our model is sometimes able to produce natural conversations.
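The perplexity comparison above comes down to the per-token probabilities a model assigns to held-out text. As a reminder of the metric, a minimal sketch (the probability values below are invented for illustration; they are not numbers from the paper):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-likelihood
    the model assigns to each observed token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Invented per-token probabilities assigned to the same reply by two models.
rnn_probs = [0.30, 0.25, 0.40, 0.20]
ngram_probs = [0.10, 0.15, 0.20, 0.05]

print(perplexity(rnn_probs))    # lower perplexity = better fit
print(perplexity(ngram_probs))
```

A model that assigned uniform probability 1/V to every token would score a perplexity of exactly V, which is why perplexity is often read as an effective branching factor.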
Advances in end-to-end training of neural networks have led to remarkable progress in many domains such as speech recognition, computer vision, and language processing. Recent work suggests that neural networks can do more
# 2. Related Work
1506.05869 | 6 |

used for neural machine translation and achieves improvements on the English-French and English-German translation tasks from the WMT'14 dataset (Luong et al., 2014; Jean et al., 2014). It has also been used for other tasks such as parsing (Vinyals et al., 2014a) and image captioning (Vinyals et al., 2014b). Since it is well known that vanilla RNNs suffer from vanishing gradients, most researchers use variants of Long Short Term Memory (LSTM) recurrent neural networks (Hochreiter & Schmidhuber, 1997).
Figure 1. Using the seq2seq framework for modeling conversations.
Our work is also inspired by the recent success of neural language modeling (Bengio et al., 2003; Mikolov et al., 2010; Mikolov, 2012), which shows that recurrent neural networks are rather effective models for natural language. More recently, work by Sordoni et al. (Sordoni et al., 2015) and Shang et al. (Shang et al., 2015) used recurrent neural networks to model dialogue in short conversations (trained on Twitter-style chats).
1506.05869 | 7 |

and train to map "ABC" to "WXYZ" as shown in Figure 1 above. The hidden state of the model when it receives the end of sequence symbol "<eos>" can be viewed as the thought vector because it stores the information of the sentence, or thought, "ABC".
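The "thought vector" idea can be made concrete with a toy vanilla-RNN encoder: the vector is simply the hidden state left over after the encoder has read the final token. This is only an illustrative sketch with random, untrained weights and a four-word vocabulary (the actual model uses trained LSTM cells at far larger scale):

```python
import math
import random

random.seed(0)
VOCAB = ["<eos>", "A", "B", "C"]
EMBED_DIM, HIDDEN_DIM = 8, 16

def rand_matrix(rows, cols):
    return [[random.gauss(0, 0.1) for _ in range(cols)] for _ in range(rows)]

# Randomly initialized parameters stand in for trained ones.
E = {tok: [random.gauss(0, 0.1) for _ in range(EMBED_DIM)] for tok in VOCAB}
W_xh = rand_matrix(EMBED_DIM, HIDDEN_DIM)
W_hh = rand_matrix(HIDDEN_DIM, HIDDEN_DIM)

def encode(tokens):
    """Read the input one token at a time with a vanilla RNN cell;
    the hidden state reached at <eos> is the "thought vector"."""
    h = [0.0] * HIDDEN_DIM
    for tok in tokens:
        x = E[tok]
        h = [math.tanh(sum(x[i] * W_xh[i][j] for i in range(EMBED_DIM))
                       + sum(h[i] * W_hh[i][j] for i in range(HIDDEN_DIM)))
             for j in range(HIDDEN_DIM)]
    return h

thought = encode(["A", "B", "C", "<eos>"])
print(len(thought))  # a fixed-size summary of the whole input sequence
```

Whatever the input length, the encoder compresses it into the same fixed-size vector, which is what makes it usable as a sentence representation.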
Building bots and conversational agents has been pursued by many researchers over the last decades, and it is out of the scope of this paper to provide an exhaustive list of references. However, most of these systems require a rather complicated processing pipeline of many stages (Lester et al., 2004; Will, 2007; Jurafsky & Martin, 2009). Our work differs from conventional systems by proposing an end-to-end approach to the problem which lacks domain knowledge. It could, in principle, be combined with other systems to re-score a short-list of candidate responses, but our work is based on producing answers given by a probabilistic model trained to maximize the probability of the answer given some context.
1506.05869 | 8 | # 3. Model
The strength of this model lies in its simplicity and generality. We can use this model for machine translation, question/answering, and conversations without major changes in the architecture. Applying this technique to conversation modeling is also straightforward: the input sequence can be the concatenation of what has been conversed so far (the context), and the output sequence is the reply.
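This context/reply construction can be sketched as follows; the `<eos>` turn separator and the exact tokenization are our own assumptions, since the paper does not pin them down:

```python
def conversation_to_pairs(turns):
    """Turn a list of utterances (alternating speakers) into
    (context, reply) training pairs for a seq2seq model."""
    pairs = []
    for i in range(1, len(turns)):
        context = " <eos> ".join(turns[:i])   # everything conversed so far
        reply = turns[i]
        pairs.append((context, reply))
    return pairs

dialog = ["hi , how can i help ?", "my vpn is down", "try restarting the client"]
for ctx, rep in conversation_to_pairs(dialog):
    print(repr(ctx), "->", repr(rep))
```

Each successive turn thus yields one training example whose input grows with the conversation, which is how the model gets to condition on more than the immediately preceding sentence.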
Unlike easier tasks like translation, however, a model like sequence-to-sequence will not be able to successfully "solve" the problem of modeling dialogue due to several obvious simplifications: the objective function being optimized does not capture the actual objective achieved through human communication, which is typically longer term and based on exchange of information rather than next step prediction. The lack of a model to ensure consistency and general world knowledge is another obvious limitation of a purely unsupervised model.
Our approach makes use of the sequence-to-sequence (seq2seq) framework described in (Sutskever et al., 2014). The model is based on a recurrent neural network which reads the input sequence one token at a time, and predicts the output sequence, also one token at a time. During training, the true output sequence is given to the model, so learning can be done by backpropagation. The model is trained to maximize the cross entropy of the correct sequence given its context. During inference, given that the true output sequence is not observed, we simply feed the predicted output token as input to predict the next output. This is a "greedy" inference approach. A less greedy approach would be to use beam search, and feed several candidates at the previous step to the next step. The predicted sequence can be selected based on the probability of the sequence.
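The read-then-generate loop just described can be sketched end to end. This is an illustrative toy, not the authors' implementation: the tiny vocabulary, hidden size, and untrained random weights are all assumptions, so the decoded reply is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["<eos>", "A", "B", "C", "W", "X", "Y", "Z"]  # toy vocabulary
V, H = len(VOCAB), 16
Wxh = rng.normal(0.0, 0.1, (H, V))  # input-to-hidden weights
Whh = rng.normal(0.0, 0.1, (H, H))  # recurrent hidden-to-hidden weights
Why = rng.normal(0.0, 0.1, (V, H))  # hidden-to-vocabulary weights

def one_hot(tok):
    v = np.zeros(V)
    v[VOCAB.index(tok)] = 1.0
    return v

def step(h, tok):
    # consume one token and update the hidden state
    return np.tanh(Wxh @ one_hot(tok) + Whh @ h)

def encode(tokens):
    # read the input one token at a time; the state after <eos>
    # summarizes the whole input sequence
    h = np.zeros(H)
    for tok in tokens + ["<eos>"]:
        h = step(h, tok)
    return h

def greedy_decode(h, max_len=10):
    # "greedy" inference: feed each predicted token back in as the next input
    out, tok = [], "<eos>"
    for _ in range(max_len):
        h = step(h, tok)
        tok = VOCAB[int(np.argmax(Why @ h))]  # most probable next token
        if tok == "<eos>":
            break
        out.append(tok)
    return out

reply = greedy_decode(encode(["A", "B", "C"]))
print(reply)  # untrained weights, so the reply is arbitrary
```

Beam search would instead keep several candidate tokens at each step and select the full sequence with the highest probability.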
Concretely, suppose that we observe a conversation with two turns: the first person utters "ABC", and the second person replies "WXYZ". We can use a recurrent neural network and train it to map "ABC" to "WXYZ".
# 4. Datasets
In our experiments we used two datasets: a closed-domain IT helpdesk troubleshooting dataset and an open-domain movie transcript dataset. The details of the two datasets are as follows.
# 4.1. IT Helpdesk Troubleshooting dataset
In our first set of experiments, we used a dataset extracted from an IT helpdesk troubleshooting chat service. In this service, customers face computer-related issues, and a specialist helps them by conversing and walking through a solution. Typical interactions (or threads) are 400 words long, and turn taking is clearly signaled. Our training set contains 30M tokens, and 3M tokens were used as validation. Some amount of clean up was performed, such as removing common names, numbers, and full URLs.
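That clean-up step might look roughly like the following sketch. This is not the authors' pipeline; the placeholder tokens and the `KNOWN_NAMES` list are illustrative assumptions.

```python
import re

KNOWN_NAMES = ["alice", "bob"]  # assumed list of common names to anonymize

def scrub(text):
    # replace full URLs, bare numbers, and known names with placeholder
    # tokens, mirroring the kind of clean-up described above
    text = re.sub(r"https?://\S+", "<URL>", text)
    text = re.sub(r"\b\d+\b", "<NUMBER>", text)
    for name in KNOWN_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "<NAME>", text,
                      flags=re.IGNORECASE)
    return text

print(scrub("hi Alice , see http://example.com/help , error code 404"))
# -> hi <NAME> , see <URL> , error code <NUMBER>
```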
# 4.2. OpenSubtitles dataset
We also tested our model on the OpenSubtitles dataset (Tiedemann, 2009). This dataset consists of movie conversations in XML format. It contains sentences uttered by characters in movies. We applied a simple processing step removing XML tags and obvious non-conversational text (e.g., hyperlinks) from the dataset. As turn taking is not clearly indicated, we treated each pair of consecutive sentences as if they were uttered by different characters. We trained our model to predict the next sentence given the previous one, and we did this for every sentence (noting that this doubles our dataset size, as each sentence is used both for context and as target). Our training and validation split has 62M sentences (923M tokens) as training examples, and the validation set has 26M sentences (395M tokens). The split is done in such a way that each sentence in a pair of sentences appears either in the training set or in the test set, but not both. Unlike the previous dataset, the OpenSubtitles dataset is quite large, and rather noisy because consecutive sentences may be uttered by the same character. Given the broad scope of movies, this is an open-domain conversation dataset, contrasting with the technical troubleshooting dataset.
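The tag-stripping and pairing scheme can be sketched as follows (illustrative only: tag removal and consecutive-sentence pairing, with no tokenization or further filtering).

```python
import re

def strip_tags(line):
    # drop XML markup, keeping only the spoken text
    return re.sub(r"<[^>]+>", "", line).strip()

def sentence_pairs(sentences):
    # every sentence is used both as context and as target,
    # so n sentences yield n-1 (context, target) training pairs
    return [(sentences[i], sentences[i + 1])
            for i in range(len(sentences) - 1)]

subs = ["<i>hello there .</i>", "hi .", "how are you ?"]
clean = [strip_tags(s) for s in subs]
print(sentence_pairs(clean))
# -> [('hello there .', 'hi .'), ('hi .', 'how are you ?')]
```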
# 5. Experiments
In this section, we describe the experimental results with the two datasets and show some samples of the interactions with the system that we trained. We also compare the performance of our system against a popular rule-based bot (CleverBot [1]) using human evaluations on a set of 200 questions.
# 5.1. IT Helpdesk Troubleshooting experiments
In this experiment, we trained a single layer LSTM with 1024 memory cells using stochastic gradient descent with gradient clipping. The vocabulary consists of the most common 20K words, which includes special tokens indicating turn taking and actor.
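Gradient clipping of the kind mentioned here rescales the gradient vector whenever its norm exceeds a threshold; a minimal sketch (the threshold value is an assumption, not one reported in the paper):

```python
import math

def clip_by_global_norm(grads, max_norm=5.0):
    # rescale all gradients jointly so their global L2 norm
    # is at most max_norm; leave them untouched otherwise
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]
    return grads

# a gradient of norm 5 clipped down to norm 1
print(clip_by_global_norm([3.0, 4.0], max_norm=1.0))
```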
Machine: hi
Human: hello
Machine: could you please let me know what are the operating systems you are using ?
Human: i am using linux
Machine: ok it was in the ssl vpn on which you stated it was broken
Machine: please check if you are logged in to corp network
Machine: what is the error that you are running please
Human: i am seeing an error related to vpn
Machine: what is the error message that you are getting when connecting to vpn using network connect ?
Human: connection refused or something like that
Machine: may i know the version of network connect you connect ?
Human: i am not sure i know that
Machine: is the network connect prompting for any user certificate
Machine: ?
Human: yes
Machine: are you at home or at work at the moment ?
Machine: office ?
Human: i am at home
Machine: try this
Machine: goto <URL> page and enter your name and password
Human: done
Machine: check if its working fine now
Human: yes , now it works !
Machine: great
Machine: anything else that i can help
Human: no , that was very useful
Machine: you are other than <NAME> : )
Machine: have a nice night
At convergence, this model achieved a perplexity of 8, whereas an n-gram model achieved 18. Below are a few samples of simulated troubleshooting sessions for three of the most common IT problems (remote access, software crashes, and password issues). In these conversations, Machine is our Neural Conversational Model, and Human the human actor interacting with it.
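Perplexity, the measure used throughout this section, is the exponential of the average per-token negative log-likelihood; a small sketch:

```python
import math

def perplexity(token_log_probs):
    # exp of the mean negative log-probability assigned to each token
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# a model that always assigns probability 1/2 to the next token
print(perplexity([math.log(0.5)] * 4))  # -> 2.0 (approximately)
```

A lower value means the model is, on average, less "surprised" by the held-out tokens.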
Conversation 1: VPN issues.
Describe your problem: i am having issues accessing vpn
[1] http://www.cleverbot.com/
(The <URL> indeed contains information about vpn access.)
Conversation 2: Browser issues.
Describe your problem: browser keeps crashing
Machine: hi
Human: hi
Machine: could you please let me know what is the operating system you are using ?
Human: linux
Machine: ok this is the issue with the system performance , nasser , tools .
Human: what is that ?
Machine: is it a windows machine ?
Machine: we have linux on your machine .
Machine: is it windows xp or windows 7 ?
Human: i am using linux
Machine: let me take a remote of your machine and check if issue has been pushed properly .
Human: ok
Machine: please open terminal and run the below commands in one computer :
Machine: sudo apt - get update
Machine: sudo apt - get upgrade
Machine: sudo <COMMAND> - updater
Human: done
Machine: let me try connecting now
Human: ok
Machine: its done
Machine: should use the second or hit run on the background
Human: done
Machine: try signing out now and follow the prompts as per the message given above
Machine: shd i reboot the machine .
Human: ok , please go ahead
Machine: once its completed please try to login .
Human: sure
Machine: please ping me back once you have tried to login
Machine: are we still connected
Human: i now logged in
Machine: thanks for your confirmation
Human: what should i do now ?
Machine: wait for 1 5 minutes to complete the login policy
Machine: and then try logging in
Human: ok
Machine: is there anything else to help you ?
Human: let my try browsing a little bit and if it keeps crashing i will let you know
Machine: sure will do
Machine: seems the mail
Machine: ok great
Human: can you tell me how i can recover it ?
Machine: i 'll need to verify who it is at that machine
Machine: can we do a hangout
Machine: ?
Human: yes
Machine: <anonymized hangout instructions and verifications>
Human: cool , i am good now
Machine: how 's this looking good ?
Machine: i 'll go ahead and close this chat .
Machine: take care . : )
Human: bye
Machine: bye .
# 5.2. OpenSubtitles experiments
In this experiment, we trained a two-layered LSTM using AdaGrad with gradient clipping. Each layer of the LSTM has 4096 memory cells, and we built a vocabulary consisting of the most frequent 100K words. To speed up the softmax, we project the memory cells to 2048 linear units before feeding the information to the classifier.
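The projection trick can be sketched as follows. Toy sizes stand in for the 4096-cell / 2048-unit / 100K-word shapes in the text, and the random weights are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
H, P, V = 8, 4, 12  # toy stand-ins for hidden=4096, projection=2048, vocab=100K
W_proj = rng.normal(size=(P, H))  # memory cells -> linear projection units
W_out = rng.normal(size=(V, P))   # projection units -> vocabulary logits

def softmax(z):
    z = z - z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def next_token_dist(h):
    # the classifier sees a P-dim projection instead of the full H-dim state,
    # so the per-step cost drops from H*V to H*P + P*V multiply-adds
    return softmax(W_out @ (W_proj @ h))

dist = next_token_dist(rng.normal(size=H))
print(dist.shape, float(dist.sum()))
```

With the paper's sizes, this replaces a 4096 x 100K output matrix with a 4096 x 2048 projection plus a 2048 x 100K classifier.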
At convergence, the perplexity of the recurrent model on the validation set is 17. Our smoothed 5-gram model achieves a perplexity of 28. Interestingly, adding the soft attention mechanism of (Bahdanau et al., 2014) did not significantly improve the perplexity on either the training or validation sets.
# 5.2.1. MODEL SAMPLES
In addition to the perplexity measure, our simple recurrent model does often produce plausible answers. Here are a few sample question-answer pairs that allowed us to understand the capabilities of the model when trained on the noisy but large dataset of OpenSubtitles from movies. Again, in these conversations, Machine is our Neural Conversational Model, and Human the human actor interacting with it.
Conversation 3: Password issues.

Describe your problem: i forgot my password
Machine: hi , this is <NAME> from helpdesk connect , what can i help you with today ?
Human: hi , i forgot my password

Conversation 1: Basic

Human: hello !
Machine: hello !
Human: how are you ?
Machine: i 'm good .
Human: what 's your name ?
Machine: i 'm julia .
Human: when were you born ?
Machine: july 20th .
Human: what year were you born ?
Machine: 1977 .
Human: where are you from ?
Machine: i 'm out in the boonies .
Human: are you a man or a woman ?
Machine: i 'm a woman .
Human: why are we here ?
Machine: i 'm not sure .
Human: okay , bye !
Machine: bye .
Human: see you later .
Machine: bye .
Conversation 2: Simple Q&A (contexts and multiple choice)
Human: my name is david . what is my name ?
Machine: david .
Human: my name is john . what is my name ?
Machine: john .
Human: are you a leader or a follower ?
Machine: i 'm a leader .
Human: are you a follower or a leader ?
Machine: i 'm a leader .
Human: how much is ten minus two ?
Machine: seventy - two .
Conversation 4: Philosophical Q&A
Human: what is the purpose of life ?
Machine: to serve the greater good .
Human: what is the purpose of living ?
Machine: to live forever .
Human: what is the purpose of existence ?
Machine: to find out what happens when we get to the planet earth .
Human: where are you now ?
Machine: i 'm in the middle of nowhere .
Human: what is the purpose of dying ?
Machine: to have a life .
Human: what is the purpose of being intelligent ?
Machine: to find out what it is .
Human: what is the purpose of emotions ?
Machine: i don 't know .
Conversation 3: General knowledge Q&A
Conversation 5: Morality
Human: who is skywalker ?
Machine: he is a hero .
Human: who is bill clinton ?
Machine: he 's a billionaire .
Human: is sky blue or black ?
Machine: blue .
Human: does a cat have a tail ?
Machine: yes .
Human: does a cat have a wing ?
Machine: no
Human: can a cat fly ?
Machine: no .
Human: how many legs does a cat have ?
Machine: four , i think .
Human: how many legs does a spider have ?
Machine: three , i think .
Human: how many legs does a centipede have ?
Machine: eight .
Human: what is the color of the sky ?
Machine: blue .
Human: what is the color of water ?
Machine: water .
Human: what is the color of blood ?
Machine: it is the same as a black eye .
Human: what is the usual color of a leaf ?
Machine: it is a green one .
Human: what is the color of a yellow car ?
Machine: yellow .
Human: how much is two plus two ?
Machine: four .
Human: what is moral ?
Machine: what empowered humanity , what intellectual the essence is .
Human: what is immoral ?
Machine: the fact that you have a child .
Human: what is morality ?
Machine: what is altruism ?
Human: what is the definition of altruism ?
Machine: if you don 't believe in god , then you don 't know .
Human: ok ... so what is the definition of morality ?
Machine: well , the truth is , you 're not a believer in god almighty .
Human: tell me the definition of morality , i am quite upset now !
Machine: i 'm not ashamed of being a philosopher !
Human: tell me the definition of morality .
Machine: i don 't have ethics .
Human: look , i need help , i need to know more about morality ...
Machine: i don 't know what ethics is .
Human: seriously , what is morality ?
Machine: what is the definition of living ?
Human: why living has anything to do with morality ?
Machine: you 're not a cop .
Human: okay , i need to know how should i behave morally ...
Machine: i don 't know how to tell you .
Human: is morality and ethics the same ?
Machine: yes , sir .
Human: what are the things that i do to be immoral ?
Machine: i don 't know .
Human: give me some examples of moral actions ...
Machine: i 'm not a moralist .
Human: alright , morality ?
Machine: integrity .
Human: what is integrity ?
Machine: i 'm sorry , i don 't know what else to say .
Human: be moral !
Machine: be a man !
Human: i really like our discussion on morality and ethics ...
Machine: and how i 'm not in the mood for a philosophical debate .
Human: what do you like to talk about ?
Machine: nothing .
1506.05869 | 24 | Conversation 6: Opinions (notice the model outputs 'he', 'she' or 'it' depending on the subject being asked)
Human: what do you think about tesla ? Machine: he's a good conductor . Human: what do you think about bill gates ? Machine: he's a good man . Human: What do you think about messi ? Machine: he's a great player . Human: what do you think about cleopatra ? Machine: oh , she's very regal . Human: what do you think about england during the reign of elizabeth ? Machine: it was a great place .
We find it encouraging that the model can remember facts, understand contexts, and perform common sense reasoning without the complexity in traditional pipelines. What surprises us is that the model does so without any explicit knowledge representation component except for the parameters in the word vectors.
Perhaps most practically significant is the fact that the model can generalize to new questions. In other words, it does not simply look up an answer by matching the question with the existing database. In fact, most of the questions presented above, except for the first conversation, do not appear in the training set.
1506.05869 | 25 | A problematic drawback is that the model does not capture a consistent personality. Indeed, if we ask not identical but semantically similar questions, the answers can sometimes be inconsistent. This is expected due to the simplicity of our model and the dataset in our experiments. The conversation below illustrates such a failure:
Conversation 7: Job and Personality
Human: what is your job ? Machine: i'm a lawyer . Human: what do you do ? Machine: i'm a doctor .
Lastly, as a comparison with other traditional bots that are rule-based, we provide the output of CleverBot for some of the questions in our test:
1506.05869 | 26 | Human: Are you a follower or a leader ? CleverBot: No !! Human: My name is david . what is my name ? CleverBot: How old are you David ? Human: My name is john . what is my name ? CleverBot: John Human: What is the purpose of being intelligent ? CleverBot: The purpose of life is to love God and love people. Human: Is sky blue or black ? CleverBot: I am 200 years old. Human: What is the color of a yellow car ? CleverBot: Brown. Human: What is the usual color of a leaf ? CleverBot: What is the color of the apple in the fridge ? Human: What is the purpose of life ? CleverBot: My purpose it to forward my species, in other words to make it easier for future generations of mankind to live. Human: What do you think about tesla ? CleverBot: I am tesla. Human: What do you think about cleopatra ? CleverBot: Cool is cool. Human: What do you think about england during the reign of elizabeth ? CleverBot: I like him. Do you ? Human: What do you think about bill gates ? CleverBot: Lets talk about Link. Human: What is moral ? CleverBot: Morality is the quality of being in accord with standards of right or good conduct. Human: What is immoral ?
1506.05869 | 29 | CleverBot: What is the objective of sistetic life ? Human: Does a cat have a wing ? CleverBot: What is the name of your husband. Human: Can a cat fly ? CleverBot: What is your husband's name ? Human: Who is skywalker ? CleverBot: There are two of them, Anakin Skywalker and Luke Skywalker, both characters from star Wars. Human: How many legs does a cat have ? CleverBot: An ant. Human: How many legs does a centipede have ? CleverBot: A centipede's number of legs varies from thirty to two hundred. Human: What is your job ? CleverBot: I work in mental health. Human: What do you do ? CleverBot: I have no job.
Human: What's thirty plus forty ? NCM: Sixty ? CB: What should we talk about ?
We believe, however, that designing a good metric to quickly measure the quality of a conversational model remains an open research problem. We measured quality by manual inspection, by using Mechanical Turk to get human evaluations comparing with a rule-based bot, and by computing the perplexity. However, there are obvious drawbacks to each of these, and we are actively pursuing this and other research directions.
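Of the metrics mentioned above, perplexity is the easiest to make concrete: it is the exponential of the average negative log-probability the model assigns to the evaluation tokens, so a model that puts probability 1/4 on every token has perplexity 4. A minimal sketch (the probability lists are invented):

```python
import math

def perplexity(token_probs):
    """exp of the average negative log-probability assigned by the model
    to each token in the evaluation sequence."""
    assert all(0.0 < p <= 1.0 for p in token_probs)
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model assigning uniform probability 1/4 to each of 4 tokens:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # 4.0 (up to float rounding)
```

Lower is better: a model that is more confident in the correct tokens gets a smaller average negative log-likelihood and hence a smaller perplexity.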
1506.05869 | 30 | 5.2.2. HUMAN EVALUATION

In order to fairly and objectively compare our model against CleverBot, we picked 200 questions, and asked four different humans to rate our model (NCM) versus CleverBot (CB). Human judges were asked to pick which of the two bots they preferred, and were also allowed to declare a tie if both answers were of equal quality.

If three out of four human judges agreed, we record that score. Our model was preferred in 97 out of 200 questions, whereas CleverBot was picked in 60 out of 200. There was a tie in 20 questions, and in 23 questions the judges were in disagreement.

# 6. Discussion

In this paper, we show that a simple language model based on the seq2seq framework can be used to train a conversational engine. Our modest results show that it can generate simple and basic conversations, and extract knowledge from a noisy but open-domain dataset. Even though the model has obvious limitations, it is surprising to us that a purely data-driven approach without any rules can produce rather proper answers to many types of questions. However, the model may require substantial modifications to be able to deliver realistic conversations. Amongst the many limitations, the lack of a coherent personality makes it difficult for our system to pass the Turing test (Turing, 1950).
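The scoring protocol above (a question counts for an outcome only when at least three of the four judges agree) can be sketched as a small tally function. The outcome labels NCM, CB, and tie come from the text; the example votes and the function name are invented for illustration.

```python
from collections import Counter

def tally(votes_per_question, quorum=3):
    """votes_per_question: one list of judge votes per question, each vote in
    {'NCM', 'CB', 'tie'}. A question is scored for an outcome only when at
    least `quorum` judges agree; otherwise it counts as 'disagreement'."""
    totals = Counter()
    for votes in votes_per_question:
        winner, count = Counter(votes).most_common(1)[0]
        totals[winner if count >= quorum else "disagreement"] += 1
    return totals

questions = [
    ["NCM", "NCM", "NCM", "CB"],   # 3 of 4 for NCM -> scored NCM
    ["CB", "CB", "CB", "CB"],      # unanimous      -> scored CB
    ["tie", "tie", "tie", "NCM"],  # 3 of 4 tie     -> scored tie
    ["NCM", "NCM", "CB", "CB"],    # 2-2 split      -> disagreement
]
print(tally(questions))  # each of the four outcomes occurs once here
```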
1506.05869 | 31 | Here are some samples of how the judges evaluated the systems (in bold we show their preferred answer; the third example shows a tie, and the fourth example shows disagreement):
Human: What is the deepest spot on the world ? NCM: The pacific ocean. CB: What ?

Human: Do you like Mexican food or Indian food ? NCM: I hate it. CB: I love Mexican food.

ACKNOWLEDGMENTS

We thank Greg Corrado, Andrew Dai, Jeff Dean, Tom Dean, Matthieu Devin, Rajat Monga, Mike Schuster, Noam Shazeer, Ilya Sutskever and the Google Brain team for the help with the project.

# References

Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.

Bengio, Y., Ducharme, R., Vincent, P., and Janvin, C. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155, 2003.

Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Computation, 1997.
1506.05869 | 32 |
Human: Who are you crazy about ? NCM: You're the one who's crazy. CB: Who are you ?
Jean, S., Cho, K., Memisevic, R., and Bengio, Y. On using very large target vocabulary for neural machine translation. CoRR, abs/1412.2007, 2014.
The questions we used, together with the answers, can be found at http://ai.stanford.edu/~quocle/QAresults.pdf
Jurafsky, D. and Martin, J. Speech and language processing. Pearson International, 2009.
A Neural Conversational Model
1506.05869 | 33 | Kalchbrenner, N. and Blunsom, P. Recurrent continuous translation models. In EMNLP, 2013.
Lester, J., Branting, K., and Mott, B. Conversational agents. In Handbook of Internet Computing. Chapman & Hall, 2004.
Luong, T., Sutskever, I., Le, Q. V., Vinyals, O., and Zaremba, W. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
Mikolov, T. Statistical Language Models based on Neural Networks. PhD thesis, Brno University of Technology, 2012.
Mikolov, T., Karafiát, M., Burget, L., Černocký, J., and Khudanpur, S. Recurrent neural network based language model. In INTERSPEECH, pp. 1045–1048, 2010.
Shang, L., Lu, Z., and Li, H. Neural responding machine for short-text conversation. In Proceedings of ACL, 2015.
1506.05869 | 34 |
Sordoni, A., Galley, M., Auli, M., Brockett, C., Ji, Y., Mitchell, M., Gao, J., Dolan, B., and Nie, J.-Y. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of NAACL, 2015.
Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In NIPS, 2014.
Tiedemann, J. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In Nicolov, N., Bontcheva, K., Angelova, G., and Mitkov, R. (eds.), Recent Advances in Natural Language Processing, volume V, pp. 237–248. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria, 2009. ISBN 978 90 272 4825 1.
Turing, A. M. Computing machinery and intelligence. Mind, pp. 433–460, 1950.
1506.05869 | 35 |
Vinyals, O., Kaiser, L., Koo, T., Petrov, S., Sutskever, I., and Hinton, G. Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014a.
Vinyals, O., Toshev, A., Bengio, S., and Erhan, D. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555, 2014b.
Will, T. Creating a Dynamic Speech Dialogue. VDM Verlag Dr, 2007.
1506.02438 | 0 | arXiv:1506.02438v6 [cs.LG] 20 Oct 2018
Published as a conference paper at ICLR 2016
# HIGH-DIMENSIONAL CONTINUOUS CONTROL USING GENERALIZED ADVANTAGE ESTIMATION
John Schulman, Philipp Moritz, Sergey Levine, Michael I. Jordan and Pieter Abbeel Department of Electrical Engineering and Computer Science University of California, Berkeley {joschu,pcmoritz,levine,jordan,pabbeel}@eecs.berkeley.edu
# ABSTRACT
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02438 | 1 | # ABSTRACT
Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ). We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
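The exponentially-weighted advantage estimator mentioned in the abstract, generalized advantage estimation (GAE), can be written as a short backward recursion over TD residuals: with δ_t = r_t + γV(s_{t+1}) − V(s_t), the estimator is Â_t = Σ_l (γλ)^l δ_{t+l}. The sketch below computes this for one episode; the reward and value numbers are arbitrary illustrations.

```python
def gae(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over one episode.
    `values` has len(rewards) + 1 entries (V of each visited state plus a
    final bootstrap value). The backward recursion
        A_t = delta_t + gamma * lam * A_{t+1}
    telescopes to sum_l (gamma * lam)**l * delta_{t+l}."""
    assert len(values) == len(rewards) + 1
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages

# lambda = 0 reduces to the one-step TD residual at each timestep:
adv = gae([1.0, 0.0], [0.5, 0.4, 0.0], gamma=1.0, lam=0.0)
print(adv)  # approximately [0.9, -0.4]
```

The λ parameter trades bias for variance: λ = 0 gives the low-variance but biased one-step residual, while λ = 1 recovers the high-variance Monte Carlo advantage.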
1506.02626 | 1 | # John Tran NVIDIA [email protected]
William J. Dally Stanford University NVIDIA [email protected]
# Abstract
Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13×, from 138 million to 10.3 million, again with no loss of accuracy.
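The middle step of the three-step pipeline above (prune the small-magnitude connections, then retrain only the survivors) can be sketched for a single weight matrix. This is a hedged illustration rather than the paper's implementation: it only builds the pruning mask and reports the compression, and the weights and threshold are invented.

```python
def prune(weights, threshold):
    """Zero out connections whose magnitude is below `threshold`, returning
    (pruned_weights, mask). Retraining would then update only the entries
    where mask == 1, keeping the pruned connections at zero."""
    mask = [[1 if abs(w) >= threshold else 0 for w in row] for row in weights]
    pruned = [[w * m for w, m in zip(row, mrow)]
              for row, mrow in zip(weights, mask)]
    return pruned, mask

W = [[0.8, -0.05, 0.3],
     [0.02, -0.9, 0.1]]
pruned, mask = prune(W, threshold=0.1)
kept = sum(sum(row) for row in mask)
print(pruned)                        # the small weights -0.05 and 0.02 are zeroed
print(f"kept {kept}/6 connections")  # kept 4/6 connections
```

In the paper's full method the threshold is chosen per layer, and the prune/retrain cycle can be iterated to push the compression further.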
# Introduction | 1506.02626#1 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 2 | # INTRODUCTION
The typical problem formulation in reinforcement learning is to maximize the expected total reward of a policy. A key source of difficulty is the long time delay between actions and their positive or negative effect on rewards; this issue is called the credit assignment problem in the reinforcement learning literature (Minsky, 1961; Sutton & Barto, 1998), and the distal reward problem in the behavioral literature (Hull, 1943). Value functions offer an elegant solution to the credit assignment problem; they allow us to estimate the goodness of an action before the delayed reward arrives. Reinforcement learning algorithms make use of value functions in a variety of different ways; this paper considers algorithms that optimize a parameterized policy and use value functions to help estimate how the policy should be improved. | 1506.02438#2 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 2 | # Introduction
Neural networks have become ubiquitous in applications ranging from computer vision [1] to speech recognition [2] and natural language processing [3]. We consider convolutional neural networks used for computer vision tasks which have grown over time. In 1998 Lecun et al. designed a CNN model LeNet-5 with less than 1M parameters to classify handwritten digits [4], while in 2012, Krizhevsky et al. [1] won the ImageNet competition with 60M parameters. Deepface classified human faces with 120M parameters [5], and Coates et al. [6] scaled up a network to 10B parameters. | 1506.02626#2 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 3 | When using a parameterized stochastic policy, it is possible to obtain an unbiased estimate of the gradient of the expected total returns (Williams, 1992; Sutton et al., 1999; Baxter & Bartlett, 2000); these noisy gradient estimates can be used in a stochastic gradient ascent algorithm. Unfortunately, the variance of the gradient estimator scales unfavorably with the time horizon, since the effect of an action is confounded with the effects of past and future actions. Another class of policy gradient algorithms, called actor-critic methods, use a value function rather than the empirical returns, obtaining an estimator with lower variance at the cost of introducing bias (Konda & Tsitsiklis, 2003; Hafner & Riedmiller, 2011). But while high variance necessitates using more samples, bias is more pernicious; even with an unlimited number of samples, bias can cause the algorithm to fail to converge, or to converge to a poor solution that is not even a local optimum.
We propose a family of policy gradient estimators that significantly reduce variance while maintaining a tolerable level of bias. We call this estimation scheme, parameterized by $\gamma \in [0, 1]$ and
Published as a conference paper at ICLR 2016 | 1506.02438#3 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 3 | While these large neural networks are very powerful, their size consumes considerable storage, memory bandwidth, and computational resources. For embedded mobile applications, these resource demands become prohibitive. Figure 1 shows the energy cost of basic arithmetic and memory operations in a 45nm CMOS process. From this data we see the energy per connection is dominated by memory access and ranges from 5pJ for 32 bit coefficients in on-chip SRAM to 640pJ for 32 bit coefficients in off-chip DRAM [7]. Large networks do not fit in on-chip storage and hence require the more costly DRAM accesses. Running a 1 billion connection neural network, for example, at 20Hz would require (20Hz)(1G)(640pJ) = 12.8W just for DRAM access - well beyond the power envelope of a typical mobile device. Our goal in pruning networks is to reduce the energy required to run such large networks so they can run in real time on mobile devices. The model size reduction from pruning also facilitates storage and transmission of mobile applications incorporating DNNs.
Relative Energy Cost | 1506.02626#3 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
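The chunk above estimates that fetching 1 billion weights from DRAM at 20Hz costs (20Hz)(1G)(640pJ) = 12.8W. A few lines reproduce that arithmetic; the function name and the SRAM comparison are our own illustrative additions, using the per-access energies the paper cites for a 45nm process.

```python
# Per-access energies for a 45nm process, as cited in the paper (Figure 1).
DRAM_ACCESS_PJ = 640.0   # 32-bit off-chip DRAM access, picojoules
SRAM_ACCESS_PJ = 5.0     # 32-bit on-chip SRAM access, picojoules

def memory_power_watts(num_connections, rate_hz, energy_pj=DRAM_ACCESS_PJ):
    """Power needed if every connection weight is fetched once per inference."""
    joules_per_inference = num_connections * energy_pj * 1e-12
    return joules_per_inference * rate_hz

print(memory_power_watts(1e9, 20))                   # DRAM: the paper's 12.8 W estimate
print(memory_power_watts(1e9, 20, SRAM_ACCESS_PJ))   # SRAM: two orders of magnitude lower, if the weights fit
```

The gap between the two calls is the motivation for pruning: a model small enough to sit in on-chip SRAM pays roughly 128× less energy per weight access.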
1506.02438 | 4 | 1
$\lambda \in [0, 1]$, the generalized advantage estimator (GAE). Related methods have been proposed in the context of online actor-critic methods (Kimura & Kobayashi, 1998; Wawrzyński, 2009). We provide a more general analysis, which is applicable in both the online and batch settings, and discuss an interpretation of our method as an instance of reward shaping (Ng et al., 1999), where the approximate value function is used to shape the reward.
We present experimental results on a number of highly challenging 3D locomotion tasks, where we show that our approach can learn complex gaits using high-dimensional, general purpose neural network function approximators for both the policy and the value function, each with over $10^4$ parameters. The policies perform torque-level control of simulated 3D robots with up to 33 state dimensions and 10 actuators.
The contributions of this paper are summarized as follows: | 1506.02438#4 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 4 | 1
| Operation | Energy [pJ] | Relative Cost |
|---|---|---|
| 32 bit int ADD | 0.1 | 1 |
| 32 bit float ADD | 0.9 | 9 |
| 32 bit Register File | 1 | 10 |
| 32 bit int MULT | 3.1 | 31 |
| 32 bit float MULT | 3.7 | 37 |
| 32 bit SRAM Cache | 5 | 50 |
| 32 bit DRAM Memory | 640 | 6400 |
Figure 1: Energy table for 45nm CMOS process [7]. Memory access is 3 orders of magnitude more energy expensive than simple arithmetic.
To achieve this goal, we present a method to prune network connections in a manner that preserves the original accuracy. After an initial training phase, we remove all connections whose weight is lower than a threshold. This pruning converts a dense, fully-connected layer to a sparse layer. This first phase learns the topology of the networks: learning which connections are important and removing the unimportant connections. We then retrain the sparse network so the remaining connections can compensate for the connections that have been removed. The phases of pruning and retraining may be repeated iteratively to further reduce network complexity. In effect, this training process learns the network connectivity in addition to the weights, much as in the mammalian brain [8][9], where synapses are created in the first few months of a child's development, followed by gradual pruning of little-used connections, falling to typical adult values. | 1506.02626#4 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
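The pruning step described in the chunk above, dropping every connection whose weight magnitude falls below a threshold, can be sketched in a few lines of NumPy. The threshold value and layer size here are illustrative assumptions, not numbers from the paper.

```python
import numpy as np

def prune_by_magnitude(weights, threshold):
    """Zero out every connection whose weight magnitude is below `threshold`.

    Returns the pruned weights and the boolean mask of surviving connections.
    """
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

# Illustrative dense layer: 256x256 weights drawn from a unit normal.
rng = np.random.default_rng(0)
w = rng.normal(0.0, 1.0, size=(256, 256))
pruned, mask = prune_by_magnitude(w, threshold=1.0)

sparsity = 1.0 - mask.mean()
print(f"fraction of connections removed: {sparsity:.2%}")
```

During the retraining phase the returned mask would be held fixed, so that pruned connections stay at zero while the survivors compensate.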
1506.02438 | 5 | The contributions of this paper are summarized as follows:
1. We provide justification and intuition for an effective variance reduction scheme for policy gradients, which we call generalized advantage estimation (GAE). While the formula has been proposed in prior work (Kimura & Kobayashi, 1998; Wawrzyński, 2009), our analysis is novel and enables GAE to be applied with a more general set of algorithms, including the batch trust-region algorithm we use for our experiments.
2. We propose the use of a trust region optimization method for the value function, which we find is a robust and efficient way to train neural network value functions with thousands of parameters. 3. By combining (1) and (2) above, we obtain an algorithm that empirically is effective at learning neural network policies for challenging control tasks. The results extend the state of the art in using reinforcement learning for high-dimensional continuous control. Videos are available at https://sites.google.com/site/gaepapersupp.
# 2 PRELIMINARIES | 1506.02438#5 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 5 | # 2 Related Work
Neural networks are typically over-parameterized, and there is significant redundancy for deep learning models [10]. This results in a waste of both computation and memory. There have been various proposals to remove the redundancy: Vanhoucke et al. [11] explored a fixed-point implementation with 8-bit integer (vs 32-bit floating point) activations. Denton et al. [12] exploited the linear structure of the neural network by finding an appropriate low-rank approximation of the parameters and keeping the accuracy within 1% of the original model. With similar accuracy loss, Gong et al. [13] compressed deep convnets using vector quantization. These approximation and quantization techniques are orthogonal to network pruning, and they can be used together to obtain further gains [14]. | 1506.02626#5 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 6 | # 2 PRELIMINARIES
We consider an undiscounted formulation of the policy optimization problem. The initial state $s_0$ is sampled from distribution $\rho_0$. A trajectory $(s_0, a_0, s_1, a_1, \ldots)$ is generated by sampling actions according to the policy $a_t \sim \pi(a_t \mid s_t)$ and sampling the states according to the dynamics $s_{t+1} \sim P(s_{t+1} \mid s_t, a_t)$, until a terminal (absorbing) state is reached. A reward $r_t = r(s_t, a_t, s_{t+1})$ is received at each timestep. The goal is to maximize the expected total reward $\sum_{t=0}^{\infty} r_t$, which is assumed to be finite for all policies. Note that we are not using a discount as part of the problem specification; it will appear below as an algorithm parameter that adjusts a bias-variance tradeoff. But the discounted problem (maximizing $\sum_{t=0}^{\infty} \gamma^t r_t$) can be handled as an instance of the undiscounted problem in which we absorb the discount factor into the reward function, making it time-dependent.
Policy gradient methods maximize the expected total reward by repeatedly estimating the gradient $g := \nabla_\theta \mathbb{E}\left[\sum_{t=0}^{\infty} r_t\right]$. There are several different related expressions for the policy gradient, which have the form
$g = \mathbb{E}\left[\sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] \quad (1)$
where $\Psi_t$ may be one of the following: | 1506.02438#6 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 6 | There have been other attempts to reduce the number of parameters of neural networks by replacing the fully connected layer with global average pooling. The Network in Network architecture [15] and GoogLeNet [16] achieve state-of-the-art results on several benchmarks by adopting this idea. However, transfer learning, i.e. reusing features learned on the ImageNet dataset and applying them to new tasks by only fine-tuning the fully connected layers, is more difficult with this approach. This problem is noted by Szegedy et al. [16] and motivates them to add a linear layer on the top of their networks to enable transfer learning.
Network pruning has been used both to reduce network complexity and to reduce over-fitting. An early approach to pruning was biased weight decay [17]. Optimal Brain Damage [18] and Optimal Brain Surgeon [19] prune networks to reduce the number of connections based on the Hessian of the loss function and suggest that such pruning is more accurate than magnitude-based pruning such as weight decay. However, computing the second order derivative requires additional computation. | 1506.02626#6 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 7 | $g = \mathbb{E}\left[\sum_{t=0}^{\infty} \Psi_t \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] \quad (1)$
where $\Psi_t$ may be one of the following:
1. $\sum_{t=0}^{\infty} r_t$: total reward of the trajectory. 2. $\sum_{t'=t}^{\infty} r_{t'}$: reward following action $a_t$. 3. $\sum_{t'=t}^{\infty} r_{t'} - b(s_t)$: baselined version of previous formula. 4. $Q^\pi(s_t, a_t)$: state-action value function. 5. $A^\pi(s_t, a_t)$: advantage function. 6. $r_t + V^\pi(s_{t+1}) - V^\pi(s_t)$: TD residual.
The latter formulas use the definitions
$V^\pi(s_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t:\infty}}\left[\sum_{l=0}^{\infty} r_{t+l}\right] \quad (2)$
$Q^\pi(s_t, a_t) := \mathbb{E}_{s_{t+1:\infty},\, a_{t+1:\infty}}\left[\sum_{l=0}^{\infty} r_{t+l}\right] \quad (3)$
$A^\pi(s_t, a_t) := Q^\pi(s_t, a_t) - V^\pi(s_t) \quad \text{(Advantage function)}$
Published as a conference paper at ICLR 2016 | 1506.02438#7 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
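Estimator 6 in the list above, the TD residual, is the building block of the paper's generalized advantage estimator, which sums exponentially discounted residuals with parameters γ and λ. A minimal sketch, with hypothetical reward and value arrays (the bootstrap convention and the sample numbers are our assumptions):

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation over a finite trajectory.

    `values` has length len(rewards) + 1: V(s_0)..V(s_T), where the last
    entry bootstraps the value of the state after the final reward.
    delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
    A_t     = sum_l (gamma * lam)^l * delta_{t+l}
    """
    rewards = np.asarray(rewards, dtype=float)
    values = np.asarray(values, dtype=float)
    deltas = rewards + gamma * values[1:] - values[:-1]   # TD residuals (choice 6)
    advantages = np.zeros_like(deltas)
    acc = 0.0
    for t in reversed(range(len(deltas))):                # backward recursion
        acc = deltas[t] + gamma * lam * acc
        advantages[t] = acc
    return advantages

# Hypothetical 3-step trajectory; numbers are made up for illustration.
rewards = np.array([1.0, 0.0, 1.0])
values = np.array([0.5, 0.4, 0.6, 0.0])
print(gae_advantages(rewards, values))
```

Setting lam=0 recovers the one-step TD residual, while lam=1 recovers the empirical return minus a baseline, the two extremes of the bias-variance tradeoff the paper discusses.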
1506.02626 | 7 | HashedNets [20] is a recent technique to reduce model sizes by using a hash function to randomly group connection weights into hash buckets, so that all connections within the same hash bucket share a single parameter value. This technique may benefit from pruning. As pointed out in Shi et al. [21] and Weinberger et al. [22], sparsity will minimize hash collisions, making feature hashing even more effective. HashedNets may be used together with pruning to give even better parameter savings.
Train Connectivity → Prune Connections → Train Weights
before pruning / after pruning: pruning synapses → pruning neurons
Figure 2: Three-Step Training Pipeline.
Figure 3: Synapses and neurons before and after pruning.
# 3 Learning Connections in Addition to Weights
Our pruning method employs a three-step process, as illustrated in Figure 2, which begins by learning the connectivity via normal network training. Unlike conventional training, however, we are not learning the final values of the weights, but rather we are learning which connections are important. | 1506.02626#7 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
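As an illustrative sketch of the three-step pipeline described above (train → prune connections → retrain), with made-up weight values; a real retraining phase would run many masked SGD steps rather than the single masked update shown:

```python
import numpy as np

def magnitude_prune(weights, threshold):
    """Step 2 of the pipeline: zero out every connection whose magnitude
    falls below the threshold, returning sparse weights and a binary mask."""
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Toy layer standing in for a trained network (hypothetical values).
w = np.array([[0.9, -0.05, 0.4],
              [-0.02, 0.7, -0.6]])
sparse_w, mask = magnitude_prune(w, threshold=0.5)

# Step 3: retrain only the surviving connections by masking the gradient,
# so pruned weights stay at zero.
grad = np.ones_like(w)
update = 0.1 * grad * mask            # masked SGD step (sketch)
assert np.count_nonzero(sparse_w) == 3   # 0.9, 0.7, -0.6 survive
```

The mask is the key data structure: it is computed once at pruning time and then reused to keep removed connections at zero during fine-tuning.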
1506.02438 | 8 | Here, the subscript of E enumerates the variables being integrated over, where states and actions are sampled sequentially from the dynamics model P (st+1 | st, at) and policy Ï(at | st), respectively. The colon notation a : b refers to the inclusive range (a, a + 1, . . . , b). These formulas are well known and straightforward to obtain; they follow directly from Proposition 1, which will be stated shortly. The choice Ψt = AÏ(st, at) yields almost the lowest possible variance, though in practice, the advantage function is not known and must be estimated. This statement can be intuitively justiï¬ed by the following interpretation of the policy gradient: that a step in the policy gradient direction should increase the probability of better-than-average actions and decrease the probability of worse-than- average actions. The advantage function, by itâs deï¬nition AÏ(s, a) = QÏ(s, a) â V Ï(s), measures whether or not the action is better or worse than the policyâs default behavior. Hence, we should choose Ψt to be the advantage function AÏ(st, at), so that the gradient term | 1506.02438#8 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 8 | The second step is to prune the low-weight connections. All connections with weights below a threshold are removed from the network, converting a dense network into a sparse network, as shown in Figure 3. The final step retrains the network to learn the final weights for the remaining sparse connections. This step is critical. If the pruned network is used without retraining, accuracy is significantly impacted.
# 3.1 Regularization
Choosing the correct regularization impacts the performance of pruning and retraining. L1 regularization penalizes non-zero parameters, resulting in more parameters near zero. This gives better accuracy after pruning but before retraining. However, the remaining connections are not as good as with L2 regularization, resulting in lower accuracy after retraining. Overall, L2 regularization gives the best pruning results. This is discussed further in the experiment section.
# 3.2 Dropout Ratio Adjustment | 1506.02626#8 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02626 | 9 | # 3.2 Dropout Ratio Adjustment
Dropout [23] is widely used to prevent over-fitting, and this also applies to retraining. During retraining, however, the dropout ratio must be adjusted to account for the change in model capacity. In dropout, each parameter is probabilistically dropped during training, but will come back during inference. In pruning, parameters are dropped forever after pruning and have no chance to come back during either training or inference. As the parameters get sparse, the classifier will select the most informative predictors and thus have much less prediction variance, which reduces over-fitting. As pruning has already reduced model capacity, the retraining dropout ratio should be smaller.
Quantitatively, let C_i be the number of connections in layer i (C_io for the original network, C_ir for the network after retraining), and let N_i be the number of neurons in layer i. Since dropout works on neurons, and C_i varies quadratically with N_i according to Equation 1, the dropout ratio after pruning the parameters should follow Equation 2, where D_o represents the original dropout rate and D_r the dropout rate during retraining.
C_i = N_i N_{i−1}   (1)        D_r = D_o √(C_{ir}/C_{io})   (2)
# 3.3 Local Pruning and Parameter Co-adaptation | 1506.02626#9 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 10 | We will introduce a parameter γ that allows us to reduce variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias. This parameter corresponds to the discount factor used in discounted formulations of MDPs, but we treat it as a variance reduction parameter in an undiscounted problem; this technique was analyzed theoretically by Marbach & Tsitsiklis (2003); Kakade (2001b); Thomas (2014). The discounted value functions are given by:
V^{π,γ}(s_t) := E_{s_{t+1:∞}, a_{t:∞}}[ Σ_{l=0}^∞ γ^l r_{t+l} ],   Q^{π,γ}(s_t, a_t) := E_{s_{t+1:∞}, a_{t+1:∞}}[ Σ_{l=0}^∞ γ^l r_{t+l} ]   (4)
A^{π,γ}(s_t, a_t) := Q^{π,γ}(s_t, a_t) − V^{π,γ}(s_t).   (5)
The discounted approximation to the policy gradient is defined as follows:

g^γ := E_{s_{0:∞}, a_{0:∞}}[ Σ_{t=0}^∞ A^{π,γ}(s_t, a_t) ∇_θ log π_θ(a_t | s_t) ].   (6)
The following section discusses how to obtain biased (but not too biased) estimators for AÏ,γ, giving us noisy estimates of the discounted policy gradient in Equation (6). | 1506.02438#10 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
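To make Eq. (4) concrete, here is a small sketch (illustrative values, not from the paper) of computing the discounted return Σ_{l=0}^∞ γ^l r_{t+l} at every timestep of a finite episode via the backward recursion G_t = r_t + γ G_{t+1}:

```python
def discounted_returns(rewards, gamma):
    """Compute G_t = sum_l gamma^l * r_{t+l} for every t by a
    backward pass: G_t = r_t + gamma * G_{t+1}."""
    g, out = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        out.append(g)
    return out[::-1]

print(discounted_returns([1.0, 1.0, 1.0], 0.5))  # [1.75, 1.5, 1.0]
```

The empirical returns computed this way are Monte Carlo estimates of V^{π,γ}(s_t) along the sampled trajectory.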
1506.02626 | 10 | C_i = N_i N_{i−1}   (1)        D_r = D_o √(C_{ir}/C_{io})   (2)
# 3.3 Local Pruning and Parameter Co-adaptation
During retraining, it is better to retain the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the pruned layers. CNNs contain fragile co-adapted features [24]: gradient descent is able to find a good solution when the network is initially trained, but not after re-initializing some layers and retraining them. So when we retrain the pruned layers, we should keep the surviving parameters instead of re-initializing them.
Table 1: Network pruning can save 9× to 13× parameters with no drop in predictive performance.
Network                Top-1 Error   Top-5 Error   Parameters   Compression Rate
LeNet-300-100 Ref      1.64%         -             267K         -
LeNet-300-100 Pruned   1.59%         -             22K          12×
LeNet-5 Ref            0.80%         -             431K         -
LeNet-5 Pruned         0.77%         -             36K          12×
AlexNet Ref            42.78%        19.73%        61M          -
AlexNet Pruned         42.77%        19.67%        6.7M         9×
VGG-16 Ref             31.50%        11.32%        138M         -
VGG-16 Pruned          31.34%        10.88%        10.3M        13×
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 11 | The following section discusses how to obtain biased (but not too biased) estimators for A^{π,γ}, giving us noisy estimates of the discounted policy gradient in Equation (6).
Before proceeding, we will introduce the notion of a γ-just estimator of the advantage function, which is an estimator that does not introduce bias when we use it in place of A^{π,γ} (which is not known and must be estimated) in Equation (6) to estimate g^γ.¹ Consider an advantage estimator Â_t(s_{0:∞}, a_{0:∞}), which may in general be a function of the entire trajectory. Definition 1. The estimator Â_t is γ-just if
E_{s_{0:∞}, a_{0:∞}}[ Â_t(s_{0:∞}, a_{0:∞}) ∇_θ log π_θ(a_t | s_t) ] = E_{s_{0:∞}, a_{0:∞}}[ A^{π,γ}(s_t, a_t) ∇_θ log π_θ(a_t | s_t) ].   (7)
It follows immediately that if Â_t is γ-just for all t, then
E_{s_{0:∞}, a_{0:∞}}[ Σ_{t=0}^∞ Â_t(s_{0:∞}, a_{0:∞}) ∇_θ log π_θ(a_t | s_t) ] = g^γ.   (8)
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 11 | Retraining the pruned layers starting with retained weights requires less computation because we donât have to back propagate through the entire network. Also, neural networks are prone to suffer the vanishing gradient problem [25] as the networks get deeper, which makes pruning errors harder to recover for deep networks. To prevent this, we ï¬x the parameters for CONV layers and only retrain the FC layers after pruning the FC layers, and vice versa.
# 3.4 Iterative Pruning
Learning the right connections is an iterative process. Pruning followed by retraining is one iteration; after many such iterations the minimum number of connections can be found. Without loss of accuracy, this method can boost the pruning rate from 5× to 9× on AlexNet compared with single-step aggressive pruning. Each iteration is a greedy search in that we find the best connections. We also experimented with probabilistically pruning parameters based on their absolute value, but this gave worse results.
# 3.5 Pruning Neurons | 1506.02626#11 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
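A control-flow sketch of the iterative prune-and-retrain loop described above; the threshold schedule and the `retrain` callback are placeholders (a real retrain step would fine-tune the surviving weights with masked gradients):

```python
import numpy as np

def iterative_prune(weights, thresholds, retrain):
    """One iteration = prune (tighten the threshold) then retrain.
    `retrain` is a caller-supplied fine-tuning function (hypothetical)."""
    mask = np.ones_like(weights)
    for t in thresholds:
        mask = mask * (np.abs(weights) >= t)   # greedy: keep best connections
        weights = retrain(weights * mask, mask)
    return weights * mask, mask

# Identity "retraining" just to show the control flow.
w = np.array([0.05, 0.2, 0.5, 0.9])
w2, m = iterative_prune(w, thresholds=[0.1, 0.3], retrain=lambda w, m: w)
assert m.tolist() == [0.0, 0.0, 1.0, 1.0]
```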
1506.02438 | 12 | E_{s_{0:∞}, a_{0:∞}}[ Σ_{t=0}^∞ Â_t(s_{0:∞}, a_{0:∞}) ∇_θ log π_θ(a_t | s_t) ] = g^γ.   (8)
One sufficient condition for Â_t to be γ-just is that Â_t decomposes as the difference between two functions Q_t and b_t, where Q_t can depend on any trajectory variables but gives an unbiased estimator of the γ-discounted Q-function, and b_t is an arbitrary function of the states and actions sampled before a_t. Proposition 1. Suppose that Â_t can be written in the form Â_t(s_{0:∞}, a_{0:∞}) = Q_t(s_{t:∞}, a_{t:∞}) − b_t(s_{0:t}, a_{0:t−1}) such that for all (s_t, a_t), E_{s_{t+1:∞}, a_{t+1:∞} | s_t, a_t}[Q_t(s_{t:∞}, a_{t:∞})] = Q^{π,γ}(s_t, a_t). Then Â_t is γ-just.
¹Note that we have already introduced bias by using A^{π,γ} in place of A^π; here we are concerned with obtaining an unbiased estimate of g^γ, which is a biased estimate of the policy gradient of the undiscounted MDP.
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 12 | # 3.5 Pruning Neurons
After pruning connections, neurons with zero input connections or zero output connections may be safely pruned. This pruning is furthered by removing all connections to or from a pruned neuron. The retraining phase automatically arrives at the result where dead neurons will have both zero input connections and zero output connections. This occurs due to gradient descent and regularization. A neuron that has zero input connections (or zero output connections) will have no contribution to the final loss, leading the gradient to be zero for its output connection (or input connection), respectively. Only the regularization term will push the weights to zero. Thus, the dead neurons will be automatically removed during retraining.
# 4 Experiments
We implemented network pruning in Caffe [26]. Caffe was modified to add a mask which disregards pruned parameters during network operation for each weight tensor. The pruning threshold is chosen as a quality parameter multiplied by the standard deviation of a layer's weights. We carried out the experiments on Nvidia TitanX and GTX980 GPUs.
We pruned four representative networks: Lenet-300-100 and Lenet-5 on MNIST, together with AlexNet and VGG-16 on ImageNet. The network parameters and accuracy¹ before and after pruning are shown in Table 1.
# 4.1 LeNet on MNIST | 1506.02626#12 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
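A sketch of the dead-neuron removal from Section 3.5, with made-up weight matrices: a hidden unit whose input column or output row is entirely zero contributes nothing to the loss and can be dropped, shrinking both adjacent weight matrices.

```python
import numpy as np

def prune_dead_neurons(w_in, w_out):
    """Drop hidden units with all-zero input OR all-zero output connections.
    w_in: (n_prev, n_hidden) weights into the layer;
    w_out: (n_hidden, n_next) weights out of the layer."""
    alive = (np.abs(w_in).sum(axis=0) > 0) & (np.abs(w_out).sum(axis=1) > 0)
    return w_in[:, alive], w_out[alive, :]

# Hypothetical 3-unit hidden layer: unit 1 lost all inputs, unit 2 all outputs.
w_in = np.array([[0.4, 0.0, 0.2],
                 [0.1, 0.0, -0.3]])
w_out = np.array([[0.5], [0.7], [0.0]])
a, b = prune_dead_neurons(w_in, w_out)
assert a.shape == (2, 1) and b.shape == (1, 1)   # only unit 0 survives
```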
1506.02438 | 13 | 3
Published as a conference paper at ICLR 2016
The proof is provided in Appendix B. It is easy to verify that the following expressions are γ-just advantage estimators for Â_t:
• Σ_{l=0}^∞ γ^l r_{t+l}
• A^{π,γ}(s_t, a_t)
• Q^{π,γ}(s_t, a_t)
• r_t + γ V^{π,γ}(s_{t+1}) − V^{π,γ}(s_t).
# 3 ADVANTAGE FUNCTION ESTIMATION
This section will be concerned with producing an accurate estimate Â_t of the discounted advantage function A^{π,γ}(s_t, a_t), which will then be used to construct a policy gradient estimator of the following form:
ĝ = (1/N) Σ_{n=1}^N Σ_{t=0}^∞ Â_t^n ∇_θ log π_θ(a_t^n | s_t^n),   (9)
where n indexes over a batch of episodes. Let V be an approximate value function. Define δ_t^V = r_t + γ V(s_{t+1}) − V(s_t), i.e., the TD residual of V with discount γ (Sutton & Barto, 1998). Note that δ_t^V can be considered as an estimate of the advantage of the action a_t. In fact, if we have the correct value function V = V^{π,γ}, then it is a γ-just advantage estimator, and in fact, an unbiased estimator of A^{π,γ}:
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
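A minimal sketch of the TD residual δ_t^V = r_t + γV(s_{t+1}) − V(s_t) defined above, using made-up rewards and value estimates:

```python
def td_residuals(rewards, values, gamma):
    """delta_t^V = r_t + gamma * V(s_{t+1}) - V(s_t).
    `values` holds V(s_0)..V(s_T), one entry longer than `rewards`."""
    return [r + gamma * values[t + 1] - values[t]
            for t, r in enumerate(rewards)]

# Hypothetical 2-step episode with an approximate value function.
deltas = td_residuals([1.0, 0.0], [0.5, 0.25, 0.0], gamma=0.9)
print([round(d, 3) for d in deltas])  # [0.725, -0.25]
```

Each δ_t is itself a one-step advantage estimate, and the estimator of Eq. (9) averages ∇_θ log π_θ weighted by such advantage estimates over a batch of episodes.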
1506.02626 | 13 | # 4.1 LeNet on MNIST
We first experimented on the MNIST dataset with the LeNet-300-100 and LeNet-5 networks [4]. LeNet-300-100 is a fully connected network with two hidden layers, with 300 and 100 neurons each, which achieves a 1.6% error rate on MNIST. LeNet-5 is a convolutional network that has two convolutional layers and two fully connected layers, which achieves a 0.8% error rate on MNIST. After pruning, the network is retrained with 1/10 of the original learning rate. Table 1 shows
¹Reference model is from the Caffe model zoo; accuracy is measured without data augmentation
Table 2: For Lenet-300-100, pruning reduces the number of weights by 12× and computation by 12×.
Layer   Weights   FLOP   Act%   Weights%   FLOP%
fc1     235K      470K   38%    8%         8%
fc2     30K       60K    65%    9%         4%
fc3     1K        2K     100%   26%        17%
Total   266K      532K   46%    8%         8%
Table 3: For Lenet-5, pruning reduces the number of weights by 12× and computation by 6×.
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
E_{s_{t+1}}[ δ_t^{V^{π,γ}} ] = E_{s_{t+1}}[ r_t + γ V^{π,γ}(s_{t+1}) − V^{π,γ}(s_t) ] = E_{s_{t+1}}[ Q^{π,γ}(s_t, a_t) − V^{π,γ}(s_t) ] = A^{π,γ}(s_t, a_t).   (10)
However, this estimator is only γ-just for V = V^{π,γ}; otherwise it will yield biased policy gradient estimates. Next, let us consider taking the sum of k of these δ terms, which we will denote by Â_t^{(k)}:

Â_t^{(1)} := δ_t^V = −V(s_t) + r_t + γ V(s_{t+1})   (11)
Â_t^{(2)} := δ_t^V + γ δ_{t+1}^V = −V(s_t) + r_t + γ r_{t+1} + γ^2 V(s_{t+2})   (12)
Â_t^{(3)} := δ_t^V + γ δ_{t+1}^V + γ^2 δ_{t+2}^V = −V(s_t) + r_t + γ r_{t+1} + γ^2 r_{t+2} + γ^3 V(s_{t+3})   (13)
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
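As a sketch (not the paper's code), the k-step estimator of Eq. (14) and its telescoping-sum equivalence to a sum of TD residuals can be checked numerically; all rewards and values below are made up:

```python
def k_step_advantage(rewards, values, gamma, t, k):
    """Eq. 14: A_t^(k) = -V(s_t) + sum_{l<k} gamma^l r_{t+l} + gamma^k V(s_{t+k})."""
    ret = sum(gamma ** l * rewards[t + l] for l in range(k))
    return -values[t] + ret + gamma ** k * values[t + k]

# Telescoping check: the same value as sum_{l<k} gamma^l * delta_{t+l}^V.
rewards, values, gamma = [1.0, 2.0, 3.0], [0.5, 0.4, 0.3, 0.2], 0.9
deltas = [rewards[i] + gamma * values[i + 1] - values[i] for i in range(3)]
lhs = k_step_advantage(rewards, values, gamma, t=0, k=3)
rhs = sum(gamma ** l * deltas[l] for l in range(3))
assert abs(lhs - rhs) < 1e-12
```

The check mirrors the telescoping-sum argument: the intermediate V terms cancel, leaving a k-step return minus the baseline V(s_t).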
1506.02438 | 15 | Â_t^{(k)} := Σ_{l=0}^{k-1} γ^l δ_{t+l}^V = −V(s_t) + r_t + γ r_{t+1} + ... + γ^{k-1} r_{t+k-1} + γ^k V(s_{t+k})   (14)

These equations result from a telescoping sum, and we see that Â_t^{(k)} involves a k-step estimate of the returns, minus a baseline term −V(s_t). Analogously to the case of δ^V, we can consider Â_t^{(k)} to be an estimator of the advantage function, which is only γ-just when V = V^{π,γ}. However, note that the bias generally becomes smaller as k → ∞, since the term γ^k V(s_{t+k}) becomes more heavily discounted, and the term −V(s_t) does not affect the bias. Taking k → ∞, we get

Â_t^{(∞)} = Σ_{l=0}^∞ γ^l δ_{t+l}^V = −V(s_t) + Σ_{l=0}^∞ γ^l r_{t+l},   (15)
which is simply the empirical returns minus the value function baseline.
The generalized advantage estimator GAE(γ, λ) is defined as the exponentially-weighted average of these k-step estimators:
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 15 | Figure 4: Visualization of the first FC layer's sparsity pattern of LeNet-300-100. It has a banded structure repeated 28 times, which corresponds to the un-pruned parameters in the center of the images, since the digits are written in the center.
pruning saves 12à parameters on these networks. For each layer of the network the table shows (left to right) the original number of weights, the number of ï¬oating point operations to compute that layerâs activations, the average percentage of activations that are non-zero, the percentage of non-zero weights after pruning, and the percentage of actually required ï¬oating point operations. | 1506.02626#15 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
The generalized advantage estimator GAE(γ, λ) is defined as the exponentially-weighted average of these k-step estimators:
\hat{A}_t^{GAE(\gamma,\lambda)} := (1-\lambda)\left(\hat{A}_t^{(1)} + \lambda \hat{A}_t^{(2)} + \lambda^2 \hat{A}_t^{(3)} + \dots\right)
= (1-\lambda)\left(\delta_t^V + \lambda(\delta_t^V + \gamma\delta_{t+1}^V) + \lambda^2(\delta_t^V + \gamma\delta_{t+1}^V + \gamma^2\delta_{t+2}^V) + \dots\right)
= (1-\lambda)\left(\delta_t^V(1 + \lambda + \lambda^2 + \dots) + \gamma\delta_{t+1}^V(\lambda + \lambda^2 + \dots) + \gamma^2\delta_{t+2}^V(\lambda^2 + \dots) + \dots\right)
= (1-\lambda)\left(\delta_t^V\frac{1}{1-\lambda} + \gamma\delta_{t+1}^V\frac{\lambda}{1-\lambda} + \gamma^2\delta_{t+2}^V\frac{\lambda^2}{1-\lambda} + \dots\right)
= \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V \quad (16)
From Equation (16), we see that the advantage estimator has a remarkably simple formula involving a discounted sum of Bellman residual terms. Section 4 discusses an interpretation of this formula as the returns in an MDP with a modified reward function. The construction we used above is closely analogous to the one used to define TD(λ) (Sutton & Barto, 1998); however, TD(λ) is an estimator of the value function, whereas here we are estimating the advantage function.
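In practice the discounted sum in Eq. (16) is usually computed with the backward recursion A_t = δ_t + γλ A_{t+1}. The sketch below (our own illustrative code, not from the paper) implements it and cross-checks it against the direct double sum:

```python
import numpy as np

def gae_advantages(rewards, values, gamma, lam):
    """GAE estimates A_t = sum_l (gamma * lam)^l * delta_{t+l} (Eq. 16),
    computed via the equivalent backward recursion
    A_t = delta_t + gamma * lam * A_{t+1}.
    `values` must hold one extra bootstrap entry, len(rewards) + 1."""
    T = len(rewards)
    deltas = rewards + gamma * values[1:] - values[:-1]
    adv = np.empty(T)
    running = 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

# Cross-check against the direct discounted sum on a random trajectory.
rng = np.random.default_rng(0)
rewards, values = rng.normal(size=20), rng.normal(size=21)
gamma, lam = 0.99, 0.95
adv = gae_advantages(rewards, values, gamma, lam)
deltas = rewards + gamma * values[1:] - values[:-1]
direct = [sum((gamma * lam)**l * deltas[t + l] for l in range(20 - t)) for t in range(20)]
```

The recursion is O(T) rather than O(T²) and is the form used by most policy-gradient implementations.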
There are two notable special cases of this formula, obtained by setting λ = 0 and λ = 1.
GAE(γ, 0): \hat{A}_t := \delta_t^V | 1506.02438#16 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 16 | An interesting byproduct is that network pruning detects visual attention regions. Figure 4 shows the sparsity pattern of the first fully connected layer of LeNet-300-100; the matrix size is 784 × 300. It has 28 bands, each of width 28, corresponding to the 28 × 28 input pixels. The colored regions of the figure, indicating non-zero parameters, correspond to the center of the image. Because digits are written in the center of the image, these are the important parameters. The graph is sparse on the left and right, corresponding to the less important regions on the top and bottom of the image. After pruning, the neural network finds the center of the image more important, and the connections to the peripheral regions are more heavily pruned.
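The center-heavy pattern can be reproduced in a toy setting. The sketch below (our own illustration, not the paper's code) scales random weights of a 784 × 300 layer by a Gaussian bump centered on the 28 × 28 input grid and applies magnitude pruning; surviving connections concentrate on central pixels, mirroring the bands in Figure 4:

```python
import numpy as np

# Pixels near the center of the 28x28 input carry larger weights, so
# magnitude pruning preferentially keeps their outgoing connections.
yy, xx = np.mgrid[0:28, 0:28]
center_importance = np.exp(-((xx - 13.5)**2 + (yy - 13.5)**2) / 50.0).ravel()  # (784,)

rng = np.random.default_rng(0)
w = rng.normal(size=(784, 300)) * center_importance[:, None]

# Keep only the largest ~8% of weights by magnitude.
mask = np.abs(w) > np.quantile(np.abs(w), 0.92)

# Survival rate of each input pixel's connections, arranged on the image grid.
pixel_density = mask.mean(axis=1).reshape(28, 28)
```

Plotting `mask` row-wise reproduces the banded structure: each 28-column band corresponds to one image row, dense in the middle and empty at the borders.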
# 4.2 AlexNet on ImageNet | 1506.02626#16 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 17 | There are two notable special cases of this formula, obtained by setting λ = 0 and λ = 1.
GAE(γ, 0): \hat{A}_t := \delta_t^V = r_t + \gamma V(s_{t+1}) - V(s_t) \quad (17)
GAE(γ, 1): \hat{A}_t := \sum_{l=0}^{\infty} \gamma^l \delta_{t+l}^V = \sum_{l=0}^{\infty} \gamma^l r_{t+l} - V(s_t) \quad (18)
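These two special cases can be checked numerically. The sketch below (illustrative code, ours) builds a random finite trajectory with V(s_T) = 0, so the sums terminate cleanly, and compares both settings of λ against Eqs. (17)-(18):

```python
import numpy as np

rng = np.random.default_rng(1)
T, gamma = 50, 0.99
rewards = rng.normal(size=T)
values = np.append(rng.normal(size=T), 0.0)   # bootstrap value of final state = 0
deltas = rewards + gamma * values[1:] - values[:-1]

def gae(lam):
    """GAE via the backward recursion A_t = delta_t + gamma * lam * A_{t+1}."""
    adv, running = np.zeros(T), 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        adv[t] = running
    return adv

lam0 = gae(0.0)   # Eq. (17): the one-step TD residuals
lam1 = gae(1.0)   # Eq. (18): empirical discounted returns minus the baseline
returns = np.array([sum(gamma**l * rewards[t + l] for l in range(T - t)) for t in range(T)])
```

Intermediate λ values interpolate between these two extremes.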
GAE(γ, 1) is γ-just regardless of the accuracy of V, but it has high variance due to the sum of terms. GAE(γ, 0) is γ-just for V = V^{π,γ} and otherwise induces bias, but it typically has much lower variance. The generalized advantage estimator for 0 < λ < 1 makes a compromise between bias and variance, controlled by parameter λ. | 1506.02438#17 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |
1506.02626 | 17 | # 4.2 AlexNet on ImageNet
We further examine the performance of pruning on the ImageNet ILSVRC-2012 dataset, which has 1.2M training examples and 50k validation examples. We use the AlexNet Caffe model as the reference model, which has 61 million parameters across 5 convolutional layers and 3 fully connected layers. The AlexNet Caffe model achieved a top-1 accuracy of 57.2% and a top-5 accuracy of 80.3%. The original AlexNet took 75 hours to train on an NVIDIA Titan X GPU. After pruning, the whole network is retrained with 1/100 of the original network's initial learning rate. It took 173 hours to retrain the pruned AlexNet. Pruning is not used when iteratively prototyping the model, but rather used for model reduction when the model is ready for deployment. Thus, the retraining time is less of a concern. Table 4 shows that AlexNet can be pruned to 1/9 of its original size without impacting accuracy, and the amount of computation can be reduced by 3×.
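The train-prune-retrain recipe can be sketched as follows. This is a hedged illustration, not the paper's implementation: `train_fn` is a placeholder we introduce for ordinary SGD-based training, and the 1/100 learning-rate scale mirrors the retraining setting described above:

```python
import numpy as np

def prune_and_retrain(weights, sparsity, train_fn, retrain_lr_scale=0.01):
    """One pruning round: drop the smallest-magnitude `sparsity` fraction of
    weights, then retrain the survivors.

    `train_fn(weights, mask, lr_scale)` is assumed to update only entries
    where mask is True and to return the new weights; retraining uses a
    learning rate scaled down by `retrain_lr_scale` (1/100 here).
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold      # step 2: keep important connections
    pruned = weights * mask                 # zero out the pruned connections
    return train_fn(pruned, mask, retrain_lr_scale), mask  # step 3: retrain
```

Because the mask is fixed during retraining, pruned connections stay at zero and the surviving weights fine-tune around the new, sparser topology.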
Table 4: For AlexNet, pruning reduces the number of weights by 9× and computation by 3×. | 1506.02626#17 | Learning both Weights and Connections for Efficient Neural Networks | Neural networks are both computationally intensive and memory intensive,
making them difficult to deploy on embedded systems. Also, conventional
networks fix the architecture before training starts; as a result, training
cannot improve the architecture. To address these limitations, we describe a
method to reduce the storage and computation required by neural networks by an
order of magnitude without affecting their accuracy by learning only the
important connections. Our method prunes redundant connections using a
three-step method. First, we train the network to learn which connections are
important. Next, we prune the unimportant connections. Finally, we retrain the
network to fine tune the weights of the remaining connections. On the ImageNet
dataset, our method reduced the number of parameters of AlexNet by a factor of
9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar
experiments with VGG-16 found that the number of parameters can be reduced by
13x, from 138 million to 10.3 million, again with no loss of accuracy. | http://arxiv.org/pdf/1506.02626 | Song Han, Jeff Pool, John Tran, William J. Dally | cs.NE, cs.CV, cs.LG | Published as a conference paper at NIPS 2015 | null | cs.NE | 20150608 | 20151030 | [
{
"id": "1507.06149"
},
{
"id": "1504.04788"
},
{
"id": "1510.00149"
}
] |
1506.02438 | 18 | We've described an advantage estimator with two separate parameters γ and λ, both of which contribute to the bias-variance tradeoff when using an approximate value function. However, they serve different purposes and work best with different ranges of values. γ most importantly determines the scale of the value function V^{π,γ}, which does not depend on λ. Taking γ < 1 introduces bias into the policy gradient estimate, regardless of the value function's accuracy. On the other hand, λ < 1 introduces bias only when the value function is inaccurate. Empirically, we find that the best value of λ is much lower than the best value of γ, likely because λ introduces far less bias than γ for a reasonably accurate value function. Using the generalized advantage estimator, we can construct a biased estimator of g^γ, the discounted policy gradient from Equation (6):
g^\gamma \approx \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t^{GAE(\gamma,\lambda)}\right] = \mathbb{E}\left[\sum_{t=0}^{\infty} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \sum_{l=0}^{\infty} (\gamma\lambda)^l \delta_{t+l}^V\right] \quad (19)
where equality holds when λ = 1.
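Given per-step score vectors ∇_θ log π_θ(a_t | s_t) and the GAE estimates, the sample estimate of Eq. (19) for one trajectory reduces to a matrix-vector product. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def policy_gradient_estimate(grad_logp, advantages):
    """Single-trajectory Monte-Carlo estimate of Eq. (19):
    g ≈ sum_t grad_theta log pi(a_t | s_t) * A_t^{GAE(gamma,lambda)}.

    `grad_logp` is a (T, n_params) array of per-step score vectors and
    `advantages` holds the GAE estimates for the same trajectory."""
    return grad_logp.T @ advantages   # shape: (n_params,)

# Toy usage with made-up score vectors and advantages.
g = policy_gradient_estimate(np.eye(3), np.array([1.0, -2.0, 0.5]))
```

In practice this estimate is averaged over many trajectories before a gradient step, or fed to a trust-region update as in the paper's experiments.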
# 4 INTERPRETATION AS REWARD SHAPING | 1506.02438#18 | High-Dimensional Continuous Control Using Generalized Advantage Estimation | Policy gradient methods are an appealing approach in reinforcement learning
because they directly optimize the cumulative reward and can straightforwardly
be used with nonlinear function approximators such as neural networks. The two
main challenges are the large number of samples typically required, and the
difficulty of obtaining stable and steady improvement despite the
nonstationarity of the incoming data. We address the first challenge by using
value functions to substantially reduce the variance of policy gradient
estimates at the cost of some bias, with an exponentially-weighted estimator of
the advantage function that is analogous to TD(lambda). We address the second
challenge by using trust region optimization procedure for both the policy and
the value function, which are represented by neural networks.
Our approach yields strong empirical results on highly challenging 3D
locomotion tasks, learning running gaits for bipedal and quadrupedal simulated
robots, and learning a policy for getting the biped to stand up from starting
out lying on the ground. In contrast to a body of prior work that uses
hand-crafted policy representations, our neural network policies map directly
from raw kinematics to joint torques. Our algorithm is fully model-free, and
the amount of simulated experience required for the learning tasks on 3D bipeds
corresponds to 1-2 weeks of real time. | http://arxiv.org/pdf/1506.02438 | John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel | cs.LG, cs.RO, cs.SY | null | null | cs.LG | 20150608 | 20181020 | [
{
"id": "1502.05477"
},
{
"id": "1509.02971"
},
{
"id": "1510.09142"
}
] |