Dataset schema (column: type, value-length or value range):
doi: string (10 to 10)
chunk-id: int64 (0 to 936)
chunk: string (401 to 2.02k)
id: string (12 to 14)
title: string (8 to 162)
summary: string (228 to 1.92k)
source: string (31 to 31)
authors: string (7 to 6.97k)
categories: string (5 to 107)
comment: string (4 to 398)
journal_ref: string (8 to 194)
primary_category: string (5 to 17)
published: string (8 to 8)
updated: string (8 to 8)
references: list
1504.00325
18
# 4.2 Human Agreement for Word Prediction
We can do a similar analysis of human agreement for the sub-task of word prediction. Consider the task of tagging the image with words that occur in the captions. For this task, we can compute the human precision and recall for a given word w by benchmarking the words used in the (k+1)-th human caption against the words used in the first k reference captions. Note that we use weighted versions of precision and recall, where each negative image has a weight of 1 and each positive image has a weight equal to the number of captions containing the word w. Human precision (Hp) and human recall (Hr) can then be computed from counts of how many subjects out of k use the word w to describe a given image, aggregated over the whole dataset (a sketch of this computation appears below).

TABLE 2: Model definitions.
o = object or visual concept
w = word associated with o
n = total number of images
k = number of captions per image
q = P(o = 1)
p = P(w = 1 | o = 1)
1504.00325#18
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
19
We plot Hp versus Hr for a set of nouns, verbs, and adjectives, and for all 1000 words considered, in Figure 3. Nouns referring to animals like ‘elephant’ have a high recall, which means that if an ‘elephant’ exists in the image, a subject is likely to talk about it (which makes intuitive sense, given that ‘elephant’ images are somewhat rare and there are no alternative words that could be used instead of ‘elephant’). On the other hand, an adjective like ‘bright’ is used inconsistently and hence has low recall. Interestingly, words with high recall also have high precision. Indeed, all the points of human agreement appear to lie on a one-dimensional curve in the two-dimensional precision-recall space.
1504.00325#19
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
20
This observation motivates us to propose a simple model for when subjects use a particular word w to describe an image. Let o denote an object or visual concept associated with word w, n be the total number of images, and k be the number of reference captions. Next, let q = P(o = 1) be the probability that object o exists in an image. For clarity, these definitions are summarized in Table 2. We make two simplifications. First, we ignore image-level saliency and instead focus on word-level saliency. Specifically, we only model p = P(w = 1|o = 1), the probability a subject uses w given that o is in the image, without conditioning on the image itself. Second, we assume that P(w = 1|o = 0) = 0, i.e., that a subject does not use w unless o is in the image. As we will show, even with these simplifications our model suffices to explain the empirical observations in Figure 3 to a reasonable degree of accuracy.
1504.00325#20
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
21
Given these assumptions, we can model human precision Hp and recall Hr for a word w given only p and k. First, given k captions per image, we need to compute the expected number of (1) captions containing w (cw), (2) true positives (tp), and (3) false positives (fp). Note that in our definition there can be up to k true positives per image (if cw = k, i.e., each of the k captions contains word w) but at most 1 false positive (if none of the k captions contains w). The expectations, in terms of k, p, and q are:
1504.00325#21
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
22
The expectations, in terms of k, p, and q, are:

$$
\begin{aligned}
E[c_w] &= \sum_{i=1}^{k} P(w_i = 1) = \sum_i P(w_i = 1 \mid o = 1)\,P(o = 1) + \sum_i P(w_i = 1 \mid o = 0)\,P(o = 0) \\
&= kpq + 0 = kpq \\
E[tp] &= \sum_{i=1}^{k} P(w_i = 1 \wedge w_{k+1} = 1) = \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 1)\,P(o = 1) + \sum_i P(w_i = 1 \wedge w_{k+1} = 1 \mid o = 0)\,P(o = 0) \\
&= kp \cdot p \cdot q + 0 = kp^2 q \\
E[fp] &= P(w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) = P(o = 1 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) + P(o = 0 \wedge w_1 \ldots w_k = 0 \wedge w_{k+1} = 1) \\
&= q(1-p)^k p + 0 = q(1-p)^k p
\end{aligned}
$$
1504.00325#22
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
23
In the above, w_i = 1 denotes that w appeared in the i-th caption. Note that we are also assuming independence between subjects conditioned on o. We can now define model precision and recall as:

$$
\hat{H}_p = \frac{nE[tp]}{nE[tp] + nE[fp]} = \frac{pk}{pk + (1-p)^k}, \qquad
\hat{H}_r = \frac{nE[tp]}{nE[c_w]} = p
$$

Note that these expressions are independent of q and only depend on p. Interestingly, because of the use of weighted precision and recall, the recall for a category comes out to be exactly equal to p, the probability a subject uses w given that o is in the image.
1504.00325#23
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
24
We set k = 4 and vary p to plot Hp versus Hr, obtaining the curve shown in blue in Figure 3 (bottom left). The curve explains the observed data quite well, closely (although not perfectly) matching the precision-recall tradeoffs of the empirical data. We can also reduce the number of captions from four and look at how the empirical and predicted precision and recall change. Figure 3 (bottom right) shows this variation as we reduce the number of reference captions per image from four to one. We see that the points of human agreement remain at the same recall value but decrease in precision, which is consistent with what the model predicts. Also, human precision with infinitely many subjects will approach one, which is again reasonable given that a subject will only use the word w if the corresponding object is in the image (and with infinitely many subjects someone else will also use the word w). In fact, the fixed recall value can help us recover p, the probability that a subject will use the word w in describing the image given that the object is present. Nouns like ‘elephant’ and ‘tennis’ have large p, which is reasonable. Verbs and adjectives, on the other hand, have smaller p values, which can be justified by the fact that a) subjects are less likely to describe attributes
1504.00325#24
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
25
Fig. 3: Precision-recall points for human agreement: we compute precision and recall by treating one human caption as the prediction and benchmarking it against the others to obtain points on the precision-recall curve. We plot these points for example nouns (top left), adjectives (top center), and verbs (top right), and for all words (bottom left). We also plot the fit of our model for human agreement to the empirical data (bottom left) and show how human agreement changes with the number of captions used (bottom right). We see that the human agreement point remains at the same recall value but dips in precision when fewer captions are used.

of objects and b) subjects might use a different word (synonym) to describe the same attribute. This analysis of human agreement also motivates using a different metric for measuring performance. We propose Precision at Human Recall (PHR) as a metric for measuring the performance of a vision system performing this task. Given that human recall for a particular word is fixed and precision varies with the number of annotations, we can look at system precision at human recall and compare it with human precision to report the performance of the vision system (a sketch of this comparison appears below).
1504.00325#25
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
26
# 5 EVALUATION SERVER INSTRUCTIONS
Directions on how to use the MS COCO caption evaluation server can be found on the MS COCO website. The evaluation server is hosted by CodaLab. To participate, a user account on CodaLab must be created. Participants need to generate results on both the validation and testing datasets. When training for the generation of results on the test dataset, the training and validation datasets may be used as the participant sees fit. That is, the validation dataset may be used for training if desired. However, when generating results on the validation set, we ask participants to train only on the training dataset and to use the validation dataset only for tuning meta-parameters. Two JSON files should be created, corresponding to results on each dataset, in the following format (a minimal example of producing such a file is sketched below):

[{
  "image_id": int,
  "caption": str,
}]

The results may then be placed into a zip file and uploaded to the server for evaluation. Code is also provided on GitHub to evaluate results on the validation dataset without having to upload to the server. The number of submissions per user is limited to a fixed amount.

# 6 DISCUSSION
1504.00325#26
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
27
# 6 DISCUSSION
Many challenges exist when creating an image caption dataset. As stated in [7], [42], [45], the captions generated by human subjects can vary significantly. However, even though two captions may be very different, they may be judged equally “good” by human subjects. Designing effective automatic evaluation metrics that are highly correlated with human judgment remains a difficult challenge [7], [42], [45], [46]. We hope that by releasing results on the validation data, we can help enable future research in this area. Since automatic evaluation metrics do not always correspond to human judgment, we hope to conduct experiments using human subjects to judge the quality of automatically generated captions: which are most similar to human captions, and whether they are grammatically correct [45], [42], [7], [4], [5]. This is essential to determining whether future algorithms are indeed improving, or whether they are merely overfitting to a specific metric. These human experiments will also allow us to evaluate the automatic evaluation metrics themselves, and see which ones are correlated with human judgment.

# REFERENCES
[1] K. Barnard and D. Forsyth, “Learning the semantics of words and pictures,” in ICCV, vol. 2, 2001, pp. 408–415.
1504.00325#27
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
28
[2] K. Barnard, P. Duygulu, D. Forsyth, N. De Freitas, D. M. Blei, and M. I. Jordan, “Matching words and pictures,” JMLR, vol. 3, pp. 1107–1135, 2003.
[3] V. Lavrenko, R. Manmatha, and J. Jeon, “A model for learning the semantics of pictures,” in NIPS, 2003.
[4] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg, “Baby talk: Understanding and generating simple image descriptions,” in CVPR, 2011.
[5] M. Mitchell, X. Han, J. Dodge, A. Mensch, A. Goyal, A. Berg, K. Yamaguchi, T. Berg, K. Stratos, and H. Daumé III, “Midge: Generating image descriptions from computer vision detections,” in EACL, 2012.
1504.00325#28
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
29
[6] A. Farhadi, M. Hejrati, M. A. Sadeghi, P. Young, C. Rashtchian, J. Hockenmaier, and D. Forsyth, “Every picture tells a story: Generating sentences from images,” in ECCV, 2010.
[7] M. Hodosh, P. Young, and J. Hockenmaier, “Framing image description as a ranking task: Data, models and evaluation metrics,” JAIR, vol. 47, pp. 853–899, 2013.
[8] P. Kuznetsova, V. Ordonez, A. C. Berg, T. L. Berg, and Y. Choi, “Collective generation of natural image descriptions,” in ACL, 2012.
[9] Y. Yang, C. L. Teo, H. Daumé III, and Y. Aloimonos, “Corpus-guided sentence generation of natural images,” in EMNLP, 2011.
[10] A. Gupta, Y. Verma, and C. Jawahar, “Choosing linguistics over vision to describe images,” in AAAI, 2012.
1504.00325#29
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
30
vision to describe images,” in AAAI, 2012.
[11] E. Bruni, G. Boleda, M. Baroni, and N.-K. Tran, “Distributional semantics in technicolor,” in ACL, 2012.
[12] Y. Feng and M. Lapata, “Automatic caption generation for news images,” TPAMI, vol. 35, no. 4, pp. 797–812, 2013.
[13] D. Elliott and F. Keller, “Image description using visual dependency representations,” in EMNLP, 2013, pp. 1292–1302.
[14] A. Karpathy, A. Joulin, and F.-F. Li, “Deep fragment embeddings for bidirectional image sentence mapping,” in NIPS, 2014.
[15] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik, “Improving image-sentence embeddings using large weakly annotated photo collections,” in ECCV, 2014, pp. 529–545.
[16] R. Mason and E. Charniak, “Nonparametric method for data-driven image captioning,” in ACL, 2014.
1504.00325#30
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
31
[16] R. Mason and E. Charniak, “Nonparametric method for data-driven image captioning,” in ACL, 2014.
[17] P. Kuznetsova, V. Ordonez, T. Berg, and Y. Choi, “Treetalk: Composition and compression of trees for image descriptions,” TACL, vol. 2, pp. 351–362, 2014.
[18] K. Ramnath, S. Baker, L. Vanderwende, M. El-Saban, S. N. Sinha, A. Kannan, N. Hassan, M. Galley, Y. Yang, D. Ramanan, A. Bergamo, and L. Torresani, “Autocaption: Automatic caption generation for personal photos,” in WACV, 2014.
[19] A. Lazaridou, E. Bruni, and M. Baroni, “Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world,” in ACL, 2014.
[20] R. Kiros, R. Salakhutdinov, and R. Zemel, “Multimodal neural language models,” in ICML, 2014.
1504.00325#31
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
32
[21] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille, “Explain images with multimodal recurrent neural networks,” arXiv preprint arXiv:1410.1090, 2014.
[22] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, “Show and tell: A neural image caption generator,” arXiv preprint arXiv:1411.4555, 2014.
[23] A. Karpathy and L. Fei-Fei, “Deep visual-semantic alignments for generating image descriptions,” arXiv preprint arXiv:1412.2306, 2014.
[24] R. Kiros, R. Salakhutdinov, and R. S. Zemel, “Unifying visual-semantic embeddings with multimodal neural language models,” arXiv preprint arXiv:1411.2539, 2014.
1504.00325#32
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
33
[25] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” arXiv preprint arXiv:1411.4389, 2014.
[26] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. Platt et al., “From captions to visual concepts and back,” arXiv preprint arXiv:1411.4952, 2014.
[27] X. Chen and C. L. Zitnick, “Learning a recurrent visual representation for image caption generation,” arXiv preprint arXiv:1411.5654, 2014.
[28] R. Lebret, P. O. Pinheiro, and R. Collobert, “Phrase-based image captioning,” arXiv preprint arXiv:1502.03671, 2015.
1504.00325#33
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
34
[29] ——, “Simple image description generator via a linear phrase-based approach,” arXiv preprint arXiv:1412.8419, 2014.
[30] A. Lazaridou, N. T. Pham, and M. Baroni, “Combining language and vision with a multimodal skip-gram model,” arXiv preprint arXiv:1501.02598, 2015.
[31] A. Krizhevsky, I. Sutskever, and G. Hinton, “ImageNet classification with deep convolutional neural networks,” in NIPS, 2012.
[32] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997.
[33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” in CVPR, 2009.
1504.00325#34
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
35
[34] M. Grubinger, P. Clough, H. Müller, and T. Deselaers, “The IAPR TC-12 benchmark: A new evaluation resource for visual information systems,” in LREC Workshop on Language Resources for Content-based Image Retrieval, 2006.
[35] V. Ordonez, G. Kulkarni, and T. Berg, “Im2text: Describing images using 1 million captioned photographs,” in NIPS, 2011.
[36] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier, “From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions,” TACL, vol. 2, pp. 67–78, 2014.
[37] J. Chen, P. Kuznetsova, D. Warren, and Y. Choi, “Déjà image-captions: A corpus of expressive image descriptions in repetition,” in NAACL, 2015.
1504.00325#35
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
36
[38] T. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” in ECCV, 2014.
[39] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, “BLEU: A method for automatic evaluation of machine translation,” in ACL, 2002.
[40] C.-Y. Lin, “ROUGE: A package for automatic evaluation of summaries,” in ACL Workshop, 2004.
[41] M. Denkowski and A. Lavie, “Meteor universal: Language specific translation evaluation for any target language,” in EACL Workshop on Statistical Machine Translation, 2014.
[42] R. Vedantam, C. L. Zitnick, and D. Parikh, “CIDEr: Consensus-based image description evaluation,” arXiv preprint arXiv:1411.5726, 2014.
1504.00325#36
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1504.00325
37
[42] R. Vedantam, C. L. Zitnick, and D. Parikh, “CIDEr: Consensus-based image description evaluation,” arXiv preprint arXiv:1411.5726, 2014.
[43] C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky, “The Stanford CoreNLP natural language processing toolkit,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, 2014, pp. 55–60. [Online]. Available: http://www.aclweb.org/anthology/P/P14/P14-5010
[44] G. A. Miller, “WordNet: A lexical database for English,” Communications of the ACM, vol. 38, no. 11, pp. 39–41, 1995.
[45] D. Elliott and F. Keller, “Comparing automatic evaluation measures for image description,” in Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, vol. 2, 2014, pp. 452–457.
[46] C. Callison-Burch, M. Osborne, and P. Koehn, “Re-evaluating the role of BLEU in machine translation research,” in EACL, vol. 6, 2006, pp. 249–256.
1504.00325#37
Microsoft COCO Captions: Data Collection and Evaluation Server
In this paper we describe the Microsoft COCO Caption dataset and evaluation server. When completed, the dataset will contain over one and a half million captions describing over 330,000 images. For the training and validation images, five independent human generated captions will be provided. To ensure consistency in evaluation of automatic caption generation algorithms, an evaluation server is used. The evaluation server receives candidate captions and scores them using several popular metrics, including BLEU, METEOR, ROUGE and CIDEr. Instructions for using the evaluation server are provided.
http://arxiv.org/pdf/1504.00325
Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, C. Lawrence Zitnick
cs.CV, cs.CL
arXiv admin note: text overlap with arXiv:1411.4952
null
cs.CV
20150401
20150403
[ { "id": "1502.03671" }, { "id": "1501.02598" } ]
1503.02531
1
Geoffrey Hinton∗† (Google Inc., Mountain View) [email protected]
Oriol Vinyals† (Google Inc., Mountain View) [email protected]
Jeff Dean (Google Inc., Mountain View) [email protected]

# Abstract
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions [3]. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators [1] have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy, and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.

# 1 Introduction
1503.02531#1
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
2
# 1 Introduction
Many insects have a larval form that is optimized for extracting energy and nutrients from the environment and a completely different adult form that is optimized for the very different requirements of traveling and reproduction. In large-scale machine learning, we typically use very similar models for the training stage and the deployment stage despite their very different requirements: For tasks like speech and object recognition, training must extract structure from very large, highly redundant datasets but it does not need to operate in real time and it can use a huge amount of computation. Deployment to a large number of users, however, has much more stringent requirements on latency and computational resources. The analogy with insects suggests that we should be willing to train very cumbersome models if that makes it easier to extract structure from the data. The cumbersome model could be an ensemble of separately trained models or a single very large model trained with a very strong regularizer such as dropout [9]. Once the cumbersome model has been trained, we can then use a different kind of training, which we call “distillation”, to transfer the knowledge from the cumbersome model to a small model that is more suitable for deployment. A version of this strategy has already been pioneered by Rich Caruana and his collaborators [1]. In their important paper they demonstrate convincingly that the knowledge acquired by a large ensemble of models can be transferred to a single small model.
1503.02531#2
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
3
A conceptual block that may have prevented more investigation of this very promising approach is that we tend to identify the knowledge in a trained model with the learned parameter values and this makes it hard to see how we can change the form of the model but keep the same knowledge. A more abstract view of the knowledge, that frees it from any particular instantiation, is that it is a learned mapping from input vectors to output vectors. For cumbersome models that learn to discriminate between a large number of classes, the normal training objective is to maximize the average log probability of the correct answer, but a side-effect of the learning is that the trained model assigns probabilities to all of the incorrect answers and even when these probabilities are very small, some of them are much larger than others. The relative probabilities of incorrect answers tell us a lot about how the cumbersome model tends to generalize. An image of a BMW, for example, may only have a very small chance of being mistaken for a garbage truck, but that mistake is still many times more probable than mistaking it for a carrot.

∗Also affiliated with the University of Toronto and the Canadian Institute for Advanced Research.
†Equal contribution.
1503.02531#3
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
4
It is generally accepted that the objective function used for training should reflect the true objective of the user as closely as possible. Despite this, models are usually trained to optimize performance on the training data when the real objective is to generalize well to new data. It would clearly be better to train models to generalize well, but this requires information about the correct way to generalize and this information is not normally available. When we are distilling the knowledge from a large model into a small one, however, we can train the small model to generalize in the same way as the large model. If the cumbersome model generalizes well because, for example, it is the average of a large ensemble of different models, a small model trained to generalize in the same way will typically do much better on test data than a small model that is trained in the normal way on the same training set as was used to train the ensemble.
1503.02531#4
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
5
An obvious way to transfer the generalization ability of the cumbersome model to a small model is to use the class probabilities produced by the cumbersome model as “soft targets” for training the small model. For this transfer stage, we could use the same training set or a separate “transfer” set. When the cumbersome model is a large ensemble of simpler models, we can use an arithmetic or geometric mean of their individual predictive distributions as the soft targets. When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases, so the small model can often be trained on much less data than the original cumbersome model and using a much higher learning rate.
1503.02531#5
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
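As mentioned in the record above, when the cumbersome model is an ensemble, the soft targets can be an arithmetic or geometric mean of the individual predictive distributions. A minimal Python sketch of both options for a single training case follows; the distributions are invented for illustration.

```python
# Hedged sketch: forming soft targets from an ensemble by taking either the
# arithmetic mean or the renormalized geometric mean of the members' predictions.
import math

def arithmetic_mean(dists):
    n, k = len(dists), len(dists[0])
    return [sum(d[i] for d in dists) / n for i in range(k)]

def geometric_mean(dists):
    n, k = len(dists), len(dists[0])
    avg_logs = [sum(math.log(d[i]) for d in dists) / n for i in range(k)]
    unnorm = [math.exp(v) for v in avg_logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Predictions of three ensemble members for one case over 4 classes.
dists = [
    [0.70, 0.20, 0.05, 0.05],
    [0.60, 0.30, 0.05, 0.05],
    [0.80, 0.10, 0.05, 0.05],
]
print("arithmetic soft targets:", arithmetic_mean(dists))
print("geometric soft targets: ", geometric_mean(dists))
```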
1503.02531
6
For tasks like MNIST in which the cumbersome model almost always produces the correct answer with very high confidence, much of the information about the learned function resides in the ratios of very small probabilities in the soft targets. For example, one version of a 2 may be given a probability of 10^-6 of being a 3 and 10^-9 of being a 7, whereas for another version it may be the other way around. This is valuable information that defines a rich similarity structure over the data (i.e., it says which 2’s look like 3’s and which look like 7’s) but it has very little influence on the cross-entropy cost function during the transfer stage because the probabilities are so close to zero. Caruana and his collaborators circumvent this problem by using the logits (the inputs to the final softmax) rather than the probabilities produced by the softmax as the targets for learning the small model, and they minimize the squared difference between the logits produced by the cumbersome model and the logits produced by the small model (a sketch of this logit-matching objective appears below). Our more general solution, called “distillation”, is to raise the temperature of the final softmax until the
1503.02531#6
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
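A hedged sketch of the logit-matching objective attributed above to Caruana and collaborators: minimizing the squared difference between the cumbersome (teacher) model's logits and the small (student) model's logits for a training case. The logit values are illustrative.

```python
# Hedged sketch: mean squared difference between teacher and student logits
# for a single training case, as in the logit-matching approach described above.

def logit_matching_loss(teacher_logits, student_logits):
    return sum((t - s) ** 2 for t, s in zip(teacher_logits, student_logits)) / len(teacher_logits)

teacher = [9.1, 2.3, -1.0, 0.4]   # illustrative logits for one case
student = [7.5, 2.0, -0.5, 0.1]
print("logit-matching loss:", logit_matching_loss(teacher, student))
```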
1503.02531
7
the logits produced by the small model. Our more general solution, called “distillation”, is to raise the temperature of the final softmax until the cumbersome model produces a suitably soft set of targets. We then use the same high temperature when training the small model to match these soft targets. We show later that matching the logits of the cumbersome model is actually a special case of distillation.
1503.02531#7
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
8
The transfer set that is used to train the small model could consist entirely of unlabeled data [1] or we could use the original training set. We have found that using the original training set works well, especially if we add a small term to the objective function that encourages the small model to predict the true targets as well as matching the soft targets provided by the cumbersome model. Typically, the small model cannot exactly match the soft targets and erring in the direction of the correct answer turns out to be helpful.

# 2 Distillation
Neural networks typically produce class probabilities by using a “softmax” output layer that converts the logit, z_i, computed for each class into a probability, q_i, by comparing z_i with the other logits:

$$ q_i = \frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)} \qquad (1) $$
1503.02531#8
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
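A minimal sketch of Eq. (1) above: a softmax with temperature T, showing that larger T produces a softer (higher-entropy) distribution over classes. Pure Python; the logits are illustrative.

```python
# Hedged sketch: softmax with temperature T; T = 1 recovers the usual softmax.
import math

def softmax_with_temperature(logits, T=1.0):
    # Subtract the max for numerical stability; this does not change the result.
    m = max(z / T for z in logits)
    exps = [math.exp(z / T - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

logits = [9.0, 3.0, 1.0, -2.0]
for T in (1.0, 3.0, 10.0):
    probs = softmax_with_temperature(logits, T)
    print(f"T = {T:4.1f}:", [round(p, 4) for p in probs])
```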
1503.02531
9
where T is a temperature that is normally set to 1. Using a higher value for T produces a softer probability distribution over classes. In the simplest form of distillation, knowledge is transferred to the distilled model by training it on a transfer set and using a soft target distribution for each case in the transfer set that is produced by using the cumbersome model with a high temperature in its softmax. The same high temperature is used when training the distilled model, but after it has been trained it uses a temperature of 1.
1503.02531#9
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
10
When the correct labels are known for all or some of the transfer set, this method can be significantly improved by also training the distilled model to produce the correct labels. One way to do this is to use the correct labels to modify the soft targets, but we found that a better way is to simply use a weighted average of two different objective functions. The first objective function is the cross entropy with the soft targets and this cross entropy is computed using the same high temperature in the softmax of the distilled model as was used for generating the soft targets from the cumbersome model. The second objective function is the cross entropy with the correct labels. This is computed using exactly the same logits in the softmax of the distilled model but at a temperature of 1. We found that the best results were generally obtained by using a considerably lower weight on the second objective function. Since the magnitudes of the gradients produced by the soft targets scale as 1/T^2 it is important to multiply them by T^2 when using both hard and soft targets. This ensures that the relative contributions of the hard and soft targets remain roughly unchanged if the temperature used for distillation is changed while experimenting with meta-parameters. # 2.1 Matching logits is a special case of distillation
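A sketch of the weighted combination described above, including the T^2 scaling of the soft term; the relative weights are illustrative assumptions (the paper only says the hard-target term got a considerably lower weight).

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T, soft_weight=0.9):
    """Weighted average of (a) cross entropy with soft targets at temperature T,
    scaled by T**2 so its gradients keep the same magnitude when T changes, and
    (b) cross entropy with the true labels using the same logits at T = 1."""
    p = F.softmax(teacher_logits / T, dim=-1)
    soft_ce = -(p * F.log_softmax(student_logits / T, dim=-1)).sum(dim=-1).mean()
    hard_ce = F.cross_entropy(student_logits, labels)
    return soft_weight * (T ** 2) * soft_ce + (1.0 - soft_weight) * hard_ce
```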
1503.02531#10
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
11
# 2.1 Matching logits is a special case of distillation Each case in the transfer set contributes a cross-entropy gradient, dC/dzi, with respect to each logit, zi, of the distilled model. If the cumbersome model has logits vi which produce soft target probabilities pi and the transfer training is done at a temperature of T, this gradient is given by: ∂C/∂zi = (1/T)(qi − pi) = (1/T)(e^{zi/T} / Σj e^{zj/T} − e^{vi/T} / Σj e^{vj/T})   (2)
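A small numerical check of Eq. 2 (toy logits of my own choosing): autograd on the soft-target cross entropy reproduces the gradient (1/T)(qi − pi).

```python
import torch
import torch.nn.functional as F

T = 2.0
z = torch.tensor([1.0, 0.5, -0.3], requires_grad=True)  # distilled model logits (toy)
v = torch.tensor([0.9, 0.6, -0.4])                       # cumbersome model logits (toy)

p = F.softmax(v / T, dim=-1)                             # soft targets p_i
C = -(p * F.log_softmax(z / T, dim=-1)).sum()            # cross entropy with the soft targets
C.backward()

q = F.softmax(z.detach() / T, dim=-1)
print(z.grad)        # equals (q - p) / T, as in Eq. 2
print((q - p) / T)
```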
1503.02531#11
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
13
So in the high temperature limit, distillation is equivalent to minimizing (1/2)(zi − vi)^2, provided the logits are zero-meaned separately for each transfer case. At lower temperatures, distillation pays much less attention to matching logits that are much more negative than the average. This is potentially advantageous because these logits are almost completely unconstrained by the cost function used for training the cumbersome model so they could be very noisy. On the other hand, the very negative logits may convey useful information about the knowledge acquired by the cumbersome model. Which of these effects dominates is an empirical question. We show that when the distilled model is much too small to capture all of the knowledge in the cumbersome model, intermediate temperatures work best, which strongly suggests that ignoring the large negative logits can be helpful. # 3 Preliminary experiments on MNIST To see how well distillation works, we trained a single large neural net with two hidden layers of 1200 rectified linear hidden units on all 60,000 training cases. The net was strongly regularized using dropout and weight-constraints as described in [5]. Dropout can be viewed as a way of training an exponentially large ensemble of models that share weights. In addition, the input images were
1503.02531#13
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
14
jittered by up to two pixels in any direction. This net achieved 67 test errors whereas a smaller net with two hidden layers of 800 rectified linear hidden units and no regularization achieved 146 errors. But if the smaller net was regularized solely by adding the additional task of matching the soft targets produced by the large net at a temperature of 20, it achieved 74 test errors. This shows that soft targets can transfer a great deal of knowledge to the distilled model, including the knowledge about how to generalize that is learned from translated training data even though the transfer set does not contain any translations. When the distilled net had 300 or more units in each of its two hidden layers, all temperatures above 8 gave fairly similar results. But when this was radically reduced to 30 units per layer, temperatures in the range 2.5 to 4 worked significantly better than higher or lower temperatures.
1503.02531#14
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
15
We then tried omitting all examples of the digit 3 from the transfer set. So from the perspective of the distilled model, 3 is a mythical digit that it has never seen. Despite this, the distilled model only makes 206 test errors of which 133 are on the 1010 threes in the test set. Most of the errors are caused by the fact that the learned bias for the 3 class is much too low. If this bias is increased by 3.5 (which optimizes overall performance on the test set), the distilled model makes 109 errors of which 14 are on 3s. So with the right bias, the distilled model gets 98.6% of the test 3s correct despite never having seen a 3 during training. If the transfer set contains only the 7s and 8s from the training set, the distilled model makes 47.3% test errors, but when the biases for 7 and 8 are reduced by 7.6 to optimize test performance, this falls to 13.2% test errors. # 4 Experiments on speech recognition In this section, we investigate the effects of ensembling Deep Neural Network (DNN) acoustic models that are used in Automatic Speech Recognition (ASR). We show that the distillation strategy that we propose in this paper achieves the desired effect of distilling an ensemble of models into a single model that works significantly better than a model of the same size that is learned directly from the same training data.
1503.02531#15
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
16
State-of-the-art ASR systems currently use DNNs to map a (short) temporal context of features derived from the waveform to a probability distribution over the discrete states of a Hidden Markov Model (HMM) [4]. More specifically, the DNN produces a probability distribution over clusters of tri-phone states at each time and a decoder then finds a path through the HMM states that is the best compromise between using high probability states and producing a transcription that is probable under the language model. Although it is possible (and desirable) to train the DNN in such a way that the decoder (and, thus, the language model) is taken into account by marginalizing over all possible paths, it is common to train the DNN to perform frame-by-frame classification by (locally) minimizing the cross entropy between the predictions made by the net and the labels given by a forced alignment with the ground truth sequence of states for each observation: θ = arg max_θ′ P(ht | st; θ′)
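A minimal sketch of this frame-by-frame training criterion, assuming a model that maps a window of acoustic features to logits over HMM state clusters and per-frame forced-alignment labels; all names here are my own.

```python
import torch.nn.functional as F

def frame_level_loss(acoustic_model, features, aligned_states):
    """Cross entropy between the net's per-frame predictions and the HMM state
    labels h_t obtained from a forced alignment with the ground-truth transcript."""
    logits = acoustic_model(features)          # (num_frames, num_hmm_states)
    return F.cross_entropy(logits, aligned_states)
```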
1503.02531#16
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
17
θ = arg max_θ′ P(ht | st; θ′) where θ are the parameters of our acoustic model P, which maps acoustic observations at time t, st, to a probability, P(ht | st; θ′), of the “correct” HMM state ht, which is determined by a forced alignment with the correct sequence of words. The model is trained with a distributed stochastic gradient descent approach. We use an architecture with 8 hidden layers each containing 2560 rectified linear units and a final softmax layer with 14,000 labels (HMM targets ht). The input is 26 frames of 40 Mel-scaled filter-bank coefficients with a 10ms advance per frame, and we predict the HMM state of the 21st frame. The total number of parameters is about 85M. This is a slightly outdated version of the acoustic model used by Android voice search, and should be considered as a very strong baseline. To train the DNN acoustic model we use about 2000 hours of spoken English data, which yields about 700M training examples. This system achieves a frame accuracy of 58.9% and a Word Error Rate (WER) of 10.9% on our development set.
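For concreteness, a rough PyTorch sketch of a network with the shape described above; this is only my reconstruction of the stated sizes, not the production model.

```python
import torch.nn as nn

def make_acoustic_dnn(context_frames=26, filterbank_dim=40,
                      hidden_units=2560, hidden_layers=8, num_states=14000):
    layers, in_dim = [], context_frames * filterbank_dim    # 26 frames x 40 coefficients
    for _ in range(hidden_layers):                          # 8 hidden layers of 2560 ReLUs
        layers += [nn.Linear(in_dim, hidden_units), nn.ReLU()]
        in_dim = hidden_units
    layers.append(nn.Linear(in_dim, num_states))            # logits over 14,000 HMM state targets
    return nn.Sequential(*layers)

model = make_acoustic_dnn()
print(sum(p.numel() for p in model.parameters()) / 1e6)     # ~84M, close to the ~85M quoted above
```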
1503.02531#17
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
18
Table 1: Frame classification accuracy and WER showing that the distilled single model performs about as well as the averaged predictions of 10 models that were used to create the soft targets.

System                 | Test Frame Accuracy | WER
Baseline               | 58.9%               | 10.9%
10xEnsemble            | 61.1%               | 10.7%
Distilled Single model | 60.8%               | 10.7%

# 4.1 Results We trained 10 separate models to predict P(ht | st; θ), using exactly the same architecture and training procedure as the baseline. The models are randomly initialized with different initial parameter values and we find that this creates sufficient diversity in the trained models to allow the averaged predictions of the ensemble to significantly outperform the individual models. We have explored adding diversity to the models by varying the sets of data that each model sees, but we found this to not significantly change our results, so we opted for the simpler approach. For the distillation we tried temperatures of [1, 2, 5, 10] and used a relative weight of 0.5 on the cross-entropy for the hard targets, where bold font indicates the best value that was used for Table 1.
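One plausible way to form the ensemble's soft targets for distillation, assuming the 10 trained models and a batch of input features are available; the helper below is my own sketch.

```python
import torch
import torch.nn.functional as F

def ensemble_soft_targets(models, features, T):
    """Average the temperature-scaled class probabilities of the ensemble members
    to obtain the soft target distribution used to train the distilled model."""
    with torch.no_grad():
        probs = [F.softmax(m(features) / T, dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)
```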
1503.02531#18
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
19
Table 1 shows that, indeed, our distillation approach is able to extract more useful information from the training set than simply using the hard labels to train a single model. More than 80% of the improvement in frame classification accuracy achieved by using an ensemble of 10 models is transferred to the distilled model which is similar to the improvement we observed in our preliminary experiments on MNIST. The ensemble gives a smaller improvement on the ultimate objective of WER (on a 23K-word test set) due to the mismatch in the objective function, but again, the improvement in WER achieved by the ensemble is transferred to the distilled model. We have recently become aware of related work on learning a small acoustic model by matching the class probabilities of an already trained larger model [8]. However, they do the distillation at a temperature of 1 using a large unlabeled dataset and their best distilled model only reduces the error rate of the small model by 28% of the gap between the error rates of the large and small models when they are both trained with hard labels. # 5 Training ensembles of specialists on very big datasets
1503.02531#19
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
20
# 5 Training ensembles of specialists on very big datasets Training an ensemble of models is a very simple way to take advantage of parallel computation and the usual objection that an ensemble requires too much computation at test time can be dealt with by using distillation. There is, however, another important objection to ensembles: If the individual models are large neural networks and the dataset is very large, the amount of computation required at training time is excessive, even though it is easy to parallelize. In this section we give an example of such a dataset and we show how learning specialist models that each focus on a different confusable subset of the classes can reduce the total amount of computation required to learn an ensemble. The main problem with specialists that focus on making fine-grained distinctions is that they overfit very easily and we describe how this overfitting may be prevented by using soft targets. # 5.1 The JFT dataset
1503.02531#20
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
21
# 5.1 The JFT dataset JFT is an internal Google dataset that has 100 million labeled images with 15,000 labels. When we did this work, Google’s baseline model for JFT was a deep convolutional neural network [7] that had been trained for about six months using asynchronous stochastic gradient descent on a large number of cores. This training used two types of parallelism [2]. First, there were many replicas of the neural net running on different sets of cores and processing different mini-batches from the training set. Each replica computes the average gradient on its current mini-batch and sends this gradient to a sharded parameter server which sends back new values for the parameters. These new values reflect all of the gradients received by the parameter server since the last time it sent parameters to the replica. Second, each replica is spread over multiple cores by putting different subsets of the neurons on each core. Ensemble training is yet a third type of parallelism that can be wrapped

Table 2: Example classes from clusters computed by our covariance matrix clustering algorithm
JFT 1: Tea party; Easter; Bridal shower; Baby shower; Easter Bunny; ...
JFT 2: Bridge; Cable-stayed bridge; Suspension bridge; Viaduct; Chimney; ...
JFT 3: Toyota Corolla E100; Opel Signum; Opel Astra; Mazda Familia; ...
1503.02531#21
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
22
Table 2: Example classes from clusters computed by our covariance matrix clustering algorithm

around the other two types, but only if a lot more cores are available. Waiting for several years to train an ensemble of models was not an option, so we needed a much faster way to improve the baseline model. # 5.2 Specialist Models When the number of classes is very large, it makes sense for the cumbersome model to be an ensemble that contains one generalist model trained on all the data and many “specialist” models, each of which is trained on data that is highly enriched in examples from a very confusable subset of the classes (like different types of mushroom). The softmax of this type of specialist can be made much smaller by combining all of the classes it does not care about into a single dustbin class. To reduce overfitting and share the work of learning lower level feature detectors, each specialist model is initialized with the weights of the generalist model. These weights are then slightly modified by training the specialist with half its examples coming from its special subset and half sampled at random from the remainder of the training set. After training, we can correct for the biased training set by incrementing the logit of the dustbin class by the log of the proportion by which the specialist class is oversampled. # 5.3 Assigning classes to specialists
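A sketch of the two mechanics described above for the specialists: the enriched (half special, half random) training stream and the dustbin-logit correction. The helper names and signatures are assumptions of mine.

```python
import math
import random

def specialist_batch(special_examples, remainder_examples, batch_size):
    """Half of each batch comes from the specialist's confusable subset and
    half is sampled at random from the remainder of the training set."""
    half = batch_size // 2
    return (random.sample(special_examples, half)
            + random.sample(remainder_examples, batch_size - half))

def correct_dustbin_logit(logits, dustbin_index, oversampling_proportion):
    """After training on the biased set, add log(oversampling proportion)
    to the dustbin logit to undo the enrichment of the special classes."""
    corrected = list(logits)
    corrected[dustbin_index] += math.log(oversampling_proportion)
    return corrected
```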
1503.02531#22
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
23
# 5.3 Assigning classes to specialists In order to derive groupings of object categories for the specialists, we decided to focus on categories that our full network often confuses. Even though we could have computed the confusion matrix and used it as a way to find such clusters, we opted for a simpler approach that does not require the true labels to construct the clusters. In particular, we apply a clustering algorithm to the covariance matrix of the predictions of our generalist model, so that a set of classes Sm that are often predicted together will be used as targets for one of our specialist models, m. We applied an on-line version of the K-means algorithm to the columns of the covariance matrix, and obtained reasonable clusters (shown in Table 2). We tried several clustering algorithms which produced similar results. # 5.4 Performing inference with ensembles of specialists Before investigating what happens when specialist models are distilled, we wanted to see how well ensembles containing specialists performed. In addition to the specialist models, we always have a generalist model so that we can deal with classes for which we have no specialists and so that we can decide which specialists to use. Given an input image x, we do top-one classification in two steps:
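A sketch of the clustering step just described (Section 5.3): the paper used an on-line K-means, and here scikit-learn's MiniBatchKMeans stands in for it (an assumption on my part), applied to the covariance matrix of the generalist's predictions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def assign_classes_to_specialists(generalist_probs, n_specialists):
    """generalist_probs: (num_examples, num_classes) predicted probabilities.
    Cluster the columns of their covariance matrix so that classes that are
    often predicted together form the subset S_m for one specialist m."""
    cov = np.cov(generalist_probs, rowvar=False)     # (num_classes, num_classes)
    labels = MiniBatchKMeans(n_clusters=n_specialists).fit_predict(cov)
    return [np.where(labels == m)[0] for m in range(n_specialists)]
```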
1503.02531#23
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
24
Step 1: For each test case, we find the n most probable classes according to the generalist model. Call this set of classes k. In our experiments, we used n = 1. Step 2: We then take all the specialist models, m, whose special subset of confusable classes, Sm, has a non-empty intersection with k and call this the active set of specialists Ak (note that this set may be empty). We then find the full probability distribution q over all the classes that minimizes: KL(pg, q) + Σ_{m∈Ak} KL(pm, q)   (5) where KL denotes the KL divergence, and pm and pg denote the probability distributions of a specialist model and the generalist full model. The distribution pm is a distribution over all the specialist classes of m plus a single dustbin class, so when computing its KL divergence from the full q distribution we sum all of the probabilities that the full q distribution assigns to all the classes in m’s dustbin.

Table 3: Classification accuracy (top 1) on the JFT development set.
System                 | Conditional Test Accuracy | Test Accuracy
Baseline               | 43.1%                     | 25.0%
+ 61 Specialist models | 45.9%                     | 26.1%
1503.02531#24
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
25
Table 3: Classification accuracy (top 1) on the JFT development set.

Table 4: Top 1 accuracy improvement by # of specialist models covering correct class on the JFT test set.
# of specialists covering | # of test examples | delta in top1 correct | relative accuracy change
0          | 350037 | 0     | 0.0%
1          | 141993 | +1421 | +3.4%
2          | 67161  | +1572 | +7.4%
3          | 38801  | +1124 | +8.8%
4          | 26298  | +835  | +10.5%
5          | 16474  | +561  | +11.1%
6          | 10682  | +362  | +11.3%
7          | 7376   | +232  | +12.8%
8          | 4703   | +182  | +13.6%
9          | 4706   | +208  | +16.6%
10 or more | 9082   | +324  | +14.1%

Eq. 5 does not have a general closed form solution, though when all the models produce a single probability for each class the solution is either the arithmetic or geometric mean, depending on whether we use KL(p, q) or KL(q, p). We parameterize q = softmax(z) (with T = 1) and we use gradient descent to optimize the logits z w.r.t. Eq. 5. Note that this optimization must be carried out for each image. # 5.5 Results
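A rough per-image sketch of the optimization of Eq. 5 described just above: gradient descent on the logits z of q = softmax(z), where each specialist's dustbin probability is the sum of the full q probabilities over its non-special classes. The warm start, learning rate, and step count are my own choices, and only the specialists in the active set Ak would be passed in.

```python
import torch
import torch.nn.functional as F

def combine_generalist_and_specialists(p_g, specialists, steps=200, lr=0.5):
    """p_g: generalist distribution over all classes (1-D tensor).
    specialists: list of (class_indices, p_m), where p_m is a distribution over
    those classes plus a final dustbin entry.  Returns q minimizing Eq. 5."""
    eps = 1e-12
    z = p_g.clamp_min(eps).log().clone().requires_grad_(True)   # warm start at the generalist
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        q = F.softmax(z, dim=-1)
        loss = (p_g * (p_g.clamp_min(eps).log() - q.clamp_min(eps).log())).sum()  # KL(p_g, q)
        for classes, p_m in specialists:
            q_special = q[classes]
            q_m = torch.cat([q_special, (1 - q_special.sum()).clamp_min(eps).view(1)])
            loss = loss + (p_m * (p_m.clamp_min(eps).log() - q_m.clamp_min(eps).log())).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.softmax(z.detach(), dim=-1)
```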
1503.02531#25
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
26
# 5.5 Results Starting from the trained baseline full network, the specialists train extremely fast (a few days instead of many weeks for JFT). Also, all the specialists are trained completely independently. Table 3 shows the absolute test accuracy for the baseline system and the baseline system combined with the specialist models. With 61 specialist models, there is a 4.4% relative improvement in test accuracy overall. We also report conditional test accuracy, which is the accuracy by only considering examples belonging to the specialist classes, and restricting our predictions to that subset of classes. For our JFT specialist experiments, we trained 61 specialist models, each with 300 classes (plus the dustbin class). Because the sets of classes for the specialists are not disjoint, we often had multiple specialists covering a particular image class. Table 4 shows the number of test set examples, the change in the number of examples correct at position 1 when using the specialist(s), and the relative percentage improvement in top1 accuracy for the JFT dataset broken down by the number of specialists covering the class. We are encouraged by the general trend that accuracy improvements are larger when we have more specialists covering a particular class, since training independent specialist models is very easy to parallelize. # 6 Soft Targets as Regularizers
1503.02531#26
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
27
# 6 Soft Targets as Regularizers One of our main claims about using soft targets instead of hard targets is that a lot of helpful information can be carried in soft targets that could not possibly be encoded with a single hard target. In this section we demonstrate that this is a very large effect by using far less data to fit the 85M parameters of the baseline speech model described earlier. Table 5 shows that with only 3% of the data (about 20M examples), training the baseline model with hard targets leads to severe overfitting (we did early stopping, as the accuracy drops sharply after reaching 44.5%), whereas the same model trained with soft targets is able to recover almost all the information in the full training set (about 2% shy). It is even more remarkable to note that we did not have to do early stopping: the system with soft targets simply “converged” to 57%. This shows that soft targets are a very effective way of communicating the regularities discovered by a model trained on all of the data to another model.

System & training set             | Train Frame Accuracy | Test Frame Accuracy
Baseline (100% of training set)   | 63.4%                | 58.9%
Baseline (3% of training set)     | 67.3%                | 44.5%
Soft Targets (3% of training set) | 65.4%                | 57.0%
1503.02531#27
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
28
Table 5: Soft targets allow a new model to generalize well from only 3% of the training set. The soft targets are obtained by training on the full training set. # 6.1 Using soft targets to prevent specialists from overfitting The specialists that we used in our experiments on the JFT dataset collapsed all of their non-specialist classes into a single dustbin class. If we allow specialists to have a full softmax over all classes, there may be a much better way to prevent them overfitting than using early stopping. A specialist is trained on data that is highly enriched in its special classes. This means that the effective size of its training set is much smaller and it has a strong tendency to overfit on its special classes. This problem cannot be solved by making the specialist a lot smaller because then we lose the very helpful transfer effects we get from modeling all of the non-specialist classes. Our experiment using 3% of the speech data strongly suggests that if a specialist is initialized with the weights of the generalist, we can make it retain nearly all of its knowledge about the non-special classes by training it with soft targets for the non-special classes in addition to training it with hard targets. The soft targets can be provided by the generalist. We are currently exploring this approach. # 7 Relationship to Mixtures of Experts
1503.02531#28
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
29
# 7 Relationship to Mixtures of Experts The use of specialists that are trained on subsets of the data has some resemblance to mixtures of experts [6] which use a gating network to compute the probability of assigning each example to each expert. At the same time as the experts are learning to deal with the examples assigned to them, the gating network is learning to choose which experts to assign each example to based on the relative discriminative performance of the experts for that example. Using the discriminative performance of the experts to determine the learned assignments is much better than simply clustering the input vectors and assigning an expert to each cluster, but it makes the training hard to parallelize: First, the weighted training set for each expert keeps changing in a way that depends on all the other experts and second, the gating network needs to compare the performance of different experts on the same example to know how to revise its assignment probabilities. These difficulties have meant that mixtures of experts are rarely used in the regime where they might be most beneficial: tasks with huge datasets that contain distinctly different subsets.
1503.02531#29
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
30
It is much easier to parallelize the training of multiple specialists. We first train a generalist model and then use the confusion matrix to define the subsets that the specialists are trained on. Once these subsets have been defined the specialists can be trained entirely independently. At test time we can use the predictions from the generalist model to decide which specialists are relevant and only these specialists need to be run. # 8 Discussion We have shown that distilling works very well for transferring knowledge from an ensemble or from a large highly regularized model into a smaller, distilled model. On MNIST distillation works remarkably well even when the transfer set that is used to train the distilled model lacks any examples of one or more of the classes. For a deep acoustic model that is a version of the one used by Android voice search, we have shown that nearly all of the improvement that is achieved by training an ensemble of deep neural nets can be distilled into a single neural net of the same size which is far easier to deploy.
1503.02531#30
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
31
For really big neural networks, it can be infeasible even to train a full ensemble, but we have shown that the performance of a single really big net that has been trained for a very long time can be significantly improved by learning a large number of specialist nets, each of which learns to discriminate between the classes in a highly confusable cluster. We have not yet shown that we can distill the knowledge in the specialists back into the single large net. # Acknowledgments We thank Yangqing Jia for assistance with training models on ImageNet and Ilya Sutskever and Yoram Singer for helpful discussions. # References [1] C. Buciluǎ, R. Caruana, and A. Niculescu-Mizil. Model compression. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’06, pages 535–541, New York, NY, USA, 2006. ACM. [2] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng. Large scale distributed deep networks. In NIPS, 2012.
1503.02531#31
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
32
[3] T. G. Dietterich. Ensemble methods in machine learning. In Multiple classifier systems, pages 1–15. Springer, 2000. [4] G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012. [5] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [6] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural computation, 3(1):79–87, 1991.
1503.02531#32
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.02531
33
[7] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012. [8] J. Li, R. Zhao, J. Huang, and Y. Gong. Learning small-size DNN with output-distribution-based criteria. In Proceedings Interspeech 2014, pages 1910–1914, 2014. [9] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
1503.02531#33
Distilling the Knowledge in a Neural Network
A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.
http://arxiv.org/pdf/1503.02531
Geoffrey Hinton, Oriol Vinyals, Jeff Dean
stat.ML, cs.LG, cs.NE
NIPS 2014 Deep Learning Workshop
null
stat.ML
20150309
20150309
[]
1503.00075
0
# Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks Kai Sheng Tai, Richard Socher*, Christopher D. Manning Computer Science Department, Stanford University, *MetaMind Inc. [email protected], [email protected], [email protected] # Abstract Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
1503.00075#0
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
1
Figure 1: Top: A chain-structured LSTM network. Bottom: A tree-structured LSTM network with arbitrary branching factor. # Introduction Most models for distributed representations of phrases and sentences—that is, models where real-valued vectors are used to represent meaning—fall into one of three classes: bag-of-words models, sequence models, and tree-structured models. In bag-of-words models, phrase and sentence representations are independent of word order; for example, they can be generated by averaging constituent word representations (Landauer and Dumais, 1997; Foltz et al., 1998). In contrast, sequence models construct sentence representations as an order-sensitive function of the sequence of tokens (Elman, 1990; Mikolov, 2012). Lastly, tree-structured models compose each phrase and sentence representation from its constituent subphrases according to a given syntactic structure over the sentence (Goller and Kuchler, 1996; Socher et al., 2011).
1503.00075#1
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
2
Order-insensitive models are insufficient to fully capture the semantics of natural language due to their inability to account for differences in meaning as a result of differences in word order or syntactic structure (e.g., “cats climb trees” vs. “trees climb cats”). We therefore turn to order-sensitive sequential or tree-structured models. In particular, tree-structured models are a linguistically attractive option due to their relation to syntactic interpretations of sentence structure. A natural question, then, is the following: to what extent (if at all) can we do better with tree-structured models as opposed to sequential models for sentence representation? In this paper, we work towards addressing this question by directly comparing a type of sequential model that has recently been used to achieve state-of-the-art results in several NLP tasks against its tree-structured generalization. Due to their capability for processing arbitrary-length sequences, recurrent neural networks
1503.00075#2
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
3
Due to their capability for processing arbitrary-length sequences, recurrent neural networks (RNNs) are a natural choice for sequence modeling tasks. Recently, RNNs with Long Short-Term Memory (LSTM) units (Hochreiter and Schmidhuber, 1997) have re-emerged as a popular architecture due to their representational power and effectiveness at capturing long-term dependencies. LSTM networks, which we review in Sec. 2, have been successfully applied to a variety of sequence modeling and prediction tasks, notably machine translation (Bahdanau et al., 2014; Sutskever et al., 2014), speech recognition (Graves et al., 2013), image caption generation (Vinyals et al., 2014), and program execution (Zaremba and Sutskever, 2014).
1503.00075#3
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
4
In this paper, we introduce a generalization of the standard LSTM architecture to tree-structured network topologies and show its superiority for representing sentence meaning over a sequential LSTM. While the standard LSTM composes its hidden state from the input at the current time step and the hidden state of the LSTM unit in the previous time step, the tree-structured LSTM, or Tree-LSTM, composes its state from an input vector and the hidden states of arbitrarily many child units. The standard LSTM can then be considered a special case of the Tree-LSTM where each internal node has exactly one child. In our evaluations, we demonstrate the empirical strength of Tree-LSTMs as models for representing sentences. We evaluate the Tree-LSTM architecture on two tasks: semantic relatedness prediction on sentence pairs and sentiment classification of sentences drawn from movie reviews. Our experiments show that Tree-LSTMs outperform existing systems and sequential LSTM baselines on both tasks. Implementations of our models and experiments are available at https://github.com/stanfordnlp/treelstm. # 2 Long Short-Term Memory Networks # 2.1 Overview
1503.00075#4
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
5
# 2 Long Short-Term Memory Networks # 2.1 Overview Recurrent neural networks (RNNs) are able to process input sequences of arbitrary length via the recursive application of a transition function on a hidden state vector ht. At each time step t, the hidden state ht is a function of the input vector xt that the network receives at time t and its previous hidden state ht−1. For example, the input vector xt could be a vector representation of the t-th word in a body of text (Elman, 1990; Mikolov, 2012). The hidden state ht ∈ Rd can be interpreted as a d-dimensional distributed representation of the sequence of tokens observed up to time t. Commonly, the RNN transition function is an affine transformation followed by a pointwise nonlinearity such as the hyperbolic tangent function: ht = tanh(W xt + U ht−1 + b). Unfortunately, a problem with RNNs with transition functions of this form is that during training, components of the gradient vector can grow or decay exponentially over long sequences (Hochreiter, 1998; Bengio et al., 1994). This problem with exploding or vanishing gradients makes it difficult for the RNN model to learn long-distance correlations in a sequence.
1503.00075#5
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
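The preceding chunk describes the basic RNN transition ht = tanh(W xt + U ht−1 + b). Below is a minimal NumPy sketch of that recurrence, not the authors' code; the dimensions, random initialization, and names such as `rnn_step` are illustrative assumptions.

```python
# Minimal sketch of the affine-plus-tanh RNN transition described above.
# Dimensions and initialization are illustrative assumptions.
import numpy as np

d_in, d_hid = 4, 3          # input and hidden dimensions (assumed)
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(d_hid, d_in))
U = rng.normal(scale=0.1, size=(d_hid, d_hid))
b = np.zeros(d_hid)

def rnn_step(x_t, h_prev):
    """One RNN transition: h_t = tanh(W x_t + U h_{t-1} + b)."""
    return np.tanh(W @ x_t + U @ h_prev + b)

# Unroll over a toy sequence of 5 input vectors.
h = np.zeros(d_hid)
for x_t in rng.normal(size=(5, d_in)):
    h = rnn_step(x_t, h)
print(h)  # final hidden state summarizing the sequence
```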
1503.00075
6
The LSTM architecture (Hochreiter and Schmidhuber, 1997) addresses this problem of learning long-term dependencies by introducing a memory cell that is able to preserve state over long periods of time. While numerous LSTM variants have been described, here we describe the version used by Zaremba and Sutskever (2014). We define the LSTM unit at each time step t to be a collection of vectors in Rd: an input gate it, a forget gate ft, an output gate ot, a memory cell ct and a hidden state ht. The entries of the gating vectors it, ft and ot are in [0, 1]. We refer to d as the memory dimension of the LSTM. The LSTM transition equations are the following: it = σ(W(i) xt + U(i) ht−1 + b(i)), ft = σ(W(f) xt + U(f) ht−1 + b(f)), ot = σ(W(o) xt + U(o) ht−1 + b(o)), ut = tanh(W(u) xt + U(u) ht−1 + b(u)), ct = it ⊙ ut + ft ⊙ ct−1, ht = ot ⊙ tanh(ct), (1)
1503.00075#6
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
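A minimal NumPy sketch of the LSTM transition equations (Eqs. 1) reconstructed in the chunk above. This is not the authors' implementation; the parameter shapes, initialization, and the helper `par` are assumptions made only for illustration.

```python
# Minimal sketch of one LSTM step: input, forget, and output gates plus a
# candidate update, combined into the memory cell and gated hidden state.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d = 4, 3  # memory dimension d is illustrative
rng = np.random.default_rng(0)

def par():  # one (W, U, b) triple per gate (assumed shapes)
    return (rng.normal(scale=0.1, size=(d, d_in)),
            rng.normal(scale=0.1, size=(d, d)),
            np.zeros(d))

Wi, Ui, bi = par(); Wf, Uf, bf = par(); Wo, Uo, bo = par(); Wu, Uu, bu = par()

def lstm_step(x_t, h_prev, c_prev):
    i = sigmoid(Wi @ x_t + Ui @ h_prev + bi)   # input gate
    f = sigmoid(Wf @ x_t + Uf @ h_prev + bf)   # forget gate
    o = sigmoid(Wo @ x_t + Uo @ h_prev + bo)   # output gate
    u = np.tanh(Wu @ x_t + Uu @ h_prev + bu)   # candidate update
    c = i * u + f * c_prev                     # memory cell
    h = o * np.tanh(c)                         # gated exposure of the cell
    return h, c

h = c = np.zeros(d)
for x_t in rng.normal(size=(6, d_in)):
    h, c = lstm_step(x_t, h, c)
print(h, c)
```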
1503.00075
7
where xt is the input at the current time step, σ denotes the logistic sigmoid function and ⊙ denotes elementwise multiplication. Intuitively, the forget gate controls the extent to which the previous memory cell is forgotten, the input gate controls how much each unit is updated, and the output gate controls the exposure of the internal memory state. The hidden state vector in an LSTM unit is therefore a gated, partial view of the state of the unit's internal memory cell. Since the value of the gating variables vary for each vector element, the model can learn to represent information over multiple time scales. # 2.2 Variants Two commonly-used variants of the basic LSTM architecture are the Bidirectional LSTM and the Multilayer LSTM (also known as the stacked or deep LSTM). Bidirectional LSTM. A Bidirectional LSTM (Graves et al., 2013) consists of two LSTMs that are run in parallel: one on the input sequence and the other on the reverse of the input sequence. At each time step, the hidden state of the Bidirectional LSTM is the concatenation of the forward and backward hidden states. This setup allows the hidden state to capture both past and future information.
1503.00075#7
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
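A minimal sketch of the Bidirectional LSTM idea from the chunk above: run one recurrent pass forward and one backward, then concatenate the per-step hidden states. To keep the sketch short, `step` is a stand-in tanh-RNN transition rather than a full LSTM cell; all names and shapes are assumptions, not the paper's code.

```python
# Forward and backward recurrent passes whose hidden states are concatenated
# per time step, so each position sees both past and future context.
import numpy as np

d_in, d = 4, 3
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(d, d_in))
U = rng.normal(scale=0.1, size=(d, d))

def step(x_t, h_prev):
    return np.tanh(W @ x_t + U @ h_prev)

def run(xs):
    h, out = np.zeros(d), []
    for x_t in xs:
        h = step(x_t, h)
        out.append(h)
    return out

xs = rng.normal(size=(5, d_in))
forward = run(xs)
backward = run(xs[::-1])[::-1]           # reverse pass, re-aligned to time order
bidir = [np.concatenate([f, b]) for f, b in zip(forward, backward)]
print(bidir[0].shape)                    # each time step now has 2d features
```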
1503.00075
8
Multilayer LSTM. In Multilayer LSTM architectures, the hidden state of an LSTM unit in layer ℓ is used as input to the LSTM unit in layer ℓ + 1 in the same time step (Graves et al., 2013; Sutskever et al., 2014). Here, the idea is to let the higher layers capture longer-term dependencies of the input sequence. These two variants can be combined as a Multilayer Bidirectional LSTM (Graves et al., 2013). # 3 Tree-Structured LSTMs A limitation of the LSTM architectures described in the previous section is that they only allow for strictly sequential information propagation. Here, we propose two natural extensions to the basic LSTM architecture: the Child-Sum Tree-LSTM and the N-ary Tree-LSTM. Both variants allow for richer network topologies where each LSTM unit is able to incorporate information from multiple child units.
1503.00075#8
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
9
As in standard LSTM units, each Tree-LSTM unit (indexed by j) contains input and output gates ij and oj, a memory cell cj and hidden state hj. The difference between the standard LSTM unit and Tree-LSTM units is that gating vectors and memory cell updates are dependent on the states of possibly many child units. Additionally, instead of a single forget gate, the Tree-LSTM unit contains one forget gate fjk for each child k. This allows the Tree-LSTM unit to selectively incorporate information from each child. For example, a Tree-LSTM model can learn to emphasize semantic heads in a semantic relatedness task, or it can learn to preserve the representation of sentiment-rich children for sentiment classification. Figure 2: Composing the memory cell c1 and hidden state h1 of a Tree-LSTM unit with two children (subscripts 2 and 3). Labeled edges correspond to gating by the indicated gating vector, with dependencies omitted for compactness.
1503.00075#9
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
10
task, or it can learn to preserve the representation of sentiment-rich children for sentiment classification. As with the standard LSTM, each Tree-LSTM unit takes an input vector xj. In our applications, each xj is a vector representation of a word in a sentence. The input word at each node depends on the tree structure used for the network. For instance, in a Tree-LSTM over a dependency tree, each node in the tree takes the vector corresponding to the head word as input, whereas in a Tree-LSTM over a constituency tree, the leaf nodes take the corresponding word vectors as input. # 3.1 Child-Sum Tree-LSTMs Given a tree, let C(j) denote the set of children of node j. The Child-Sum Tree-LSTM transition equations are the following: h̃j = Σ_{k ∈ C(j)} hk, (2) ij = σ(W(i) xj + U(i) h̃j + b(i)), (3) fjk = σ(W(f) xj + U(f) hk + b(f)), (4) oj = σ(W(o) xj + U(o) h̃j + b(o)), (5) uj = tanh(W(u) xj + U(u) h̃j + b(u)), (6)
1503.00075#10
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
11
fjk = σ(W(f) xj + U(f) hk + b(f)), (4) oj = σ(W(o) xj + U(o) h̃j + b(o)), (5) uj = tanh(W(u) xj + U(u) h̃j + b(u)), (6) cj = ij ⊙ uj + Σ_{k ∈ C(j)} fjk ⊙ ck, (7) hj = oj ⊙ tanh(cj), (8) where in Eq. 4, k ∈ C(j). Intuitively, we can interpret each parameter matrix in these equations as encoding correlations between the component vectors of the Tree-LSTM unit, the input xj, and the hidden states hk of the unit's children. For example, in a dependency tree application, the model can learn parameters W(i) such that the components of the input gate ij have values close to 1 (i.e., “open”) when a semantically important content word (such as a verb) is given as input, and values close to 0 (i.e., “closed”) when the input is a relatively unimportant word (such as a determiner).
1503.00075#11
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
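A minimal NumPy sketch of a Child-Sum Tree-LSTM node update following Eqs. 2-8 as reconstructed in the chunks above. This is not the authors' released code; the parameter shapes, initialization, and the function name `child_sum_node` are assumptions for illustration.

```python
# Child-Sum Tree-LSTM: gates are conditioned on the sum of child hidden
# states, and each child gets its own forget gate (Eq. 4).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

d_in, d = 4, 3
rng = np.random.default_rng(0)

def par():
    return (rng.normal(scale=0.1, size=(d, d_in)),
            rng.normal(scale=0.1, size=(d, d)),
            np.zeros(d))

Wi, Ui, bi = par(); Wf, Uf, bf = par(); Wo, Uo, bo = par(); Wu, Uu, bu = par()

def child_sum_node(x_j, child_h, child_c):
    """child_h, child_c: lists of hidden states / memory cells of node j's children."""
    h_tilde = np.sum(child_h, axis=0) if child_h else np.zeros(d)   # Eq. 2
    i = sigmoid(Wi @ x_j + Ui @ h_tilde + bi)                        # Eq. 3
    o = sigmoid(Wo @ x_j + Uo @ h_tilde + bo)                        # Eq. 5
    u = np.tanh(Wu @ x_j + Uu @ h_tilde + bu)                        # Eq. 6
    c = i * u                                                        # Eq. 7, first term
    for h_k, c_k in zip(child_h, child_c):
        f_k = sigmoid(Wf @ x_j + Uf @ h_k + bf)                      # Eq. 4: one forget gate per child
        c += f_k * c_k                                               # Eq. 7, child contributions
    h = o * np.tanh(c)                                               # Eq. 8
    return h, c

# Toy use: a parent node with two leaf children.
leaf1 = child_sum_node(rng.normal(size=d_in), [], [])
leaf2 = child_sum_node(rng.normal(size=d_in), [], [])
root = child_sum_node(rng.normal(size=d_in),
                      [leaf1[0], leaf2[0]], [leaf1[1], leaf2[1]])
print(root[0])
```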
1503.00075
12
Dependency Tree-LSTMs. Since the Child-Sum Tree-LSTM unit conditions its components on the sum of child hidden states hk, it is well-suited for trees with high branching factor or whose children are unordered. For example, it is a good choice for dependency trees, where the number of dependents of a head can be highly variable. We refer to a Child-Sum Tree-LSTM applied to a dependency tree as a Dependency Tree-LSTM. # 3.2 N-ary Tree-LSTMs The N-ary Tree-LSTM can be used on tree structures where the branching factor is at most N and where children are ordered, i.e., they can be indexed from 1 to N. For any node j, write the hidden state and memory cell of its kth child as hjk and cjk respectively. The N-ary Tree-LSTM transition equations are the following: ij = σ(W(i) xj + Σ_{ℓ=1}^{N} U(i)_ℓ hjℓ + b(i)), (9) fjk = σ(W(f) xj + Σ_{ℓ=1}^{N} U(f)_{kℓ} hjℓ + b(f)), (10)
1503.00075#12
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
13
ij = σ(W(i) xj + Σ_{ℓ=1}^{N} U(i)_ℓ hjℓ + b(i)), (9) fjk = σ(W(f) xj + Σ_{ℓ=1}^{N} U(f)_{kℓ} hjℓ + b(f)), (10) oj = σ(W(o) xj + Σ_{ℓ=1}^{N} U(o)_ℓ hjℓ + b(o)), (11) uj = tanh(W(u) xj + Σ_{ℓ=1}^{N} U(u)_ℓ hjℓ + b(u)), (12) model to learn more fine-grained conditioning on the states of a unit's children than the Child-Sum Tree-LSTM. Consider, for example, a constituency tree application where the left child of a node corresponds to a noun phrase, and the right child to a verb phrase. Suppose that in this case it is advantageous to emphasize the verb phrase in the representation. Then the U(f)_{kℓ} parameters can be trained such that the components of fj1 are close to 0 (i.e., “forget”), while the components of fj2 are close to 1 (i.e., “preserve”).
1503.00075#13
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
14
Forget gate parameterization. In Eq. 10, we define a parameterization of the kth child's forget gate fjk that contains “off-diagonal” parameter matrices U(f)_{kℓ}, k ≠ ℓ. This parameterization allows for more flexible control of information propagation from child to parent. For example, this allows the left hidden state in a binary tree to have either an excitatory or inhibitory effect on the forget gate of the right child. However, for large values of N, these additional parameters are impractical and may be tied or fixed to zero. Constituency Tree-LSTMs. We can naturally apply Binary Tree-LSTM units to binarized constituency trees since left and right child nodes are distinguished. We refer to this application of Binary Tree-LSTMs as a Constituency Tree-LSTM. Note that in Constituency Tree-LSTMs, a node j receives an input vector xj only if it is a leaf node.
1503.00075#14
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
15
In the remainder of this paper, we focus on the special cases of Dependency Tree-LSTMs and Constituency Tree-LSTMs. These architectures are in fact closely related; since we consider only binarized constituency trees, the parameterizations of the two models are very similar. The key difference is in the application of the compositional parameters: dependent vs. head for Dependency Tree-LSTMs, and left child vs. right child for Constituency Tree-LSTMs. cj = ij ⊙ uj + Σ_{ℓ=1}^{N} fjℓ ⊙ cjℓ, (13) hj = oj ⊙ tanh(cj), (14) where in Eq. 10, k = 1, 2, . . . , N. Note that when the tree is simply a chain, both Eqs. 2–8 and Eqs. 9–14 reduce to the standard LSTM transitions, Eqs. 1. The introduction of separate parameter matrices for each child k allows the N-ary Tree-LSTM model to learn more fine-grained conditioning on the states of a unit's children than the Child-Sum Tree-LSTM. # 4 Models We now describe two specific models that apply the Tree-LSTM architectures described in the previous section. # 4.1 Tree-LSTM Classification
1503.00075#15
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
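A minimal sketch of an N-ary Tree-LSTM update (here binary, N = 2) following Eqs. 9-14 as reconstructed above, including the per-child parameter matrices and the "off-diagonal" forget-gate matrices U(f)_{kℓ}. Shapes, initialization, and the name `nary_node` are assumptions for illustration, not the authors' implementation.

```python
# Binary (N = 2) Tree-LSTM node update with separate parameter matrices per
# ordered child and off-diagonal forget-gate matrices Uf[k][l].
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

N, d_in, d = 2, 4, 3
rng = np.random.default_rng(0)
W = {g: rng.normal(scale=0.1, size=(d, d_in)) for g in "ifou"}
b = {g: np.zeros(d) for g in "ifou"}
U = {g: [rng.normal(scale=0.1, size=(d, d)) for _ in range(N)] for g in "iou"}
Uf = [[rng.normal(scale=0.1, size=(d, d)) for _ in range(N)] for _ in range(N)]

def nary_node(x_j, child_h, child_c):
    """child_h, child_c: length-N lists of the ordered children's states."""
    i = sigmoid(W["i"] @ x_j + sum(U["i"][l] @ child_h[l] for l in range(N)) + b["i"])   # Eq. 9
    o = sigmoid(W["o"] @ x_j + sum(U["o"][l] @ child_h[l] for l in range(N)) + b["o"])   # Eq. 11
    u = np.tanh(W["u"] @ x_j + sum(U["u"][l] @ child_h[l] for l in range(N)) + b["u"])   # Eq. 12
    c = i * u
    for k in range(N):
        f_k = sigmoid(W["f"] @ x_j + sum(Uf[k][l] @ child_h[l] for l in range(N)) + b["f"])  # Eq. 10
        c += f_k * child_c[k]                                                                # Eq. 13
    h = o * np.tanh(c)                                                                       # Eq. 14
    return h, c

zeros = [np.zeros(d), np.zeros(d)]
h, c = nary_node(rng.normal(size=d_in), zeros, zeros)
print(h)
```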
1503.00075
16
# 4.1 Tree-LSTM Classification In this setting, we wish to predict labels ŷ from a discrete set of classes Y for some subset of nodes in a tree. For example, the label for a node in a parse tree could correspond to some property of the phrase spanned by that node. At each node j, we use a softmax classifier to predict the label ŷj given the inputs {x}j observed at nodes in the subtree rooted at j. The classifier takes the hidden state hj at the node as input: p̂θ(y | {x}j) = softmax(W(s) hj + b(s)), ŷj = arg max_y p̂θ(y | {x}j). The cost function is the negative log-likelihood of the true class labels y(k) at each labeled node: J(θ) = −(1/m) Σ_{k=1}^{m} log p̂θ(y(k) | {x}(k)) + (λ/2) ||θ||²₂, where m is the number of labeled nodes in the training set, the superscript k indicates the kth labeled node, and λ is an L2 regularization hyperparameter. # 4.2 Semantic Relatedness of Sentence Pairs
1503.00075#16
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
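A minimal sketch of the node-level softmax classifier and regularized negative log-likelihood from Sec. 4.1 as reconstructed above. The hidden states and labels here are toy stand-ins for Tree-LSTM outputs, and the parameter names are assumptions, not the paper's code.

```python
# Softmax classifier over node hidden states plus mean NLL with an L2 penalty.
import numpy as np

num_classes, d = 5, 3
rng = np.random.default_rng(0)
Ws = rng.normal(scale=0.1, size=(num_classes, d))
bs = np.zeros(num_classes)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(h_j):
    """p_hat(y | {x}_j) = softmax(Ws h_j + bs); predicted label is the argmax."""
    p = softmax(Ws @ h_j + bs)
    return p, int(np.argmax(p))

def loss(hidden_states, labels, lam=1e-4):
    """Mean negative log-likelihood over labeled nodes plus L2 penalty on Ws."""
    nll = -np.mean([np.log(predict(h)[0][y]) for h, y in zip(hidden_states, labels)])
    return nll + 0.5 * lam * np.sum(Ws ** 2)

hs = rng.normal(size=(10, d))          # pretend Tree-LSTM hidden states
ys = rng.integers(0, num_classes, 10)  # pretend sentiment labels
print(loss(hs, ys))
```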
1503.00075
17
# 4.2 Semantic Relatedness of Sentence Pairs Given a sentence pair, we wish to predict a real-valued similarity score in some range [1, K], where K > 1 is an integer. The sequence {1, 2, . . . , K} is some ordinal scale of similarity, where higher scores indicate greater degrees of similarity, and we allow real-valued scores to account for ground-truth ratings that are an average over the evaluations of several human annotators. We first produce sentence representations hL and hR for each sentence in the pair using a Tree-LSTM model over each sentence's parse tree. Given these sentence representations, we predict the similarity score ŷ using a neural network that considers both the distance and angle between the pair (hL, hR): h× = hL ⊙ hR, (15) h+ = |hL − hR|, hs = σ(W(×) h× + W(+) h+ + b(h)), p̂θ = softmax(W(p) hs + b(p)), ŷ = rT p̂θ, where rT = [1 2 . . . K] and the absolute value function is applied elementwise. The use of both distance measures h× and h+ is empirically motivated: we find that the combination outperforms the use of either measure alone. The multiplicative measure h× can be interpreted as an elementwise
1503.00075#17
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
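A minimal sketch of the similarity head of Sec. 4.2: the elementwise product and absolute difference of the two sentence vectors feed a hidden layer and a softmax over scores 1..K, and the prediction is the expected rating. Shapes, initialization, and variable names are illustrative assumptions.

```python
# Similarity prediction from two sentence vectors hL, hR as an expected
# rating r^T p_hat over the ordinal scale 1..K.
import numpy as np

K, d, d_hidden = 5, 3, 8
rng = np.random.default_rng(0)
W_times = rng.normal(scale=0.1, size=(d_hidden, d))   # weights for the product term
W_plus = rng.normal(scale=0.1, size=(d_hidden, d))    # weights for the abs-difference term
b_h = np.zeros(d_hidden)
W_p = rng.normal(scale=0.1, size=(K, d_hidden))
b_p = np.zeros(K)
r = np.arange(1, K + 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def similarity(hL, hR):
    h_times = hL * hR            # elementwise product (sign comparison)
    h_plus = np.abs(hL - hR)     # elementwise absolute distance
    hs = sigmoid(W_times @ h_times + W_plus @ h_plus + b_h)
    p_hat = softmax(W_p @ hs + b_p)
    return float(r @ p_hat)      # expected rating in [1, K]

hL, hR = rng.normal(size=d), rng.normal(size=d)
print(similarity(hL, hR))
```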
1503.00075
18
comparison of the signs of the input representations. We want the expected rating under the predicted distribution p̂θ given model parameters θ to be close to the gold rating y ∈ [1, K]: ŷ = rT p̂θ ≈ y. We therefore define a sparse target distribution1 p that satisfies y = rT p: pi = y − ⌊y⌋ if i = ⌊y⌋ + 1, pi = ⌊y⌋ − y + 1 if i = ⌊y⌋, and pi = 0 otherwise, for 1 ≤ i ≤ K. The cost function is the regularized KL-divergence between p and p̂θ: J(θ) = (1/m) Σ_{k=1}^{m} KL(p(k) || p̂θ(k)) + (λ/2) ||θ||²₂, where m is the number of training pairs and the superscript k indicates the kth sentence pair. # 5 Experiments We evaluate our Tree-LSTM architectures on two tasks: (1) sentiment classification of sentences sampled from movie reviews and (2) predicting the semantic relatedness of sentence pairs. In comparing our Tree-LSTMs against sequential LSTMs, we control for the number of LSTM parameters by varying the dimensionality of the hidden states2. Details for each model variant are summarized in Table 1.
1503.00075#18
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
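A minimal sketch of the sparse target distribution p satisfying y = rT p and the KL-divergence objective from the chunk above. The gold score y here is a toy value, and the function names are assumptions made for illustration.

```python
# Build the two-bin target distribution for a real-valued score y in [1, K]
# and compare it to a predicted distribution with KL divergence.
import numpy as np

K = 5

def target_distribution(y):
    """p_i = y - floor(y) at i = floor(y)+1, floor(y) - y + 1 at i = floor(y), else 0."""
    p = np.zeros(K)
    floor_y = int(np.floor(y))
    if floor_y >= K:          # y == K puts all mass on the top bin
        p[K - 1] = 1.0
        return p
    p[floor_y - 1] = floor_y - y + 1   # index i-1 stores score i
    p[floor_y] = y - floor_y
    return p

def kl_divergence(p, p_hat, eps=1e-12):
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / (p_hat[mask] + eps))))

y = 3.6
p = target_distribution(y)
r = np.arange(1, K + 1)
print(p, r @ p)                       # the expected rating recovers y = 3.6
p_hat = np.full(K, 1.0 / K)           # a uniform predicted distribution
print(kl_divergence(p, p_hat))
```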
1503.00075
19
# 5.1 Sentiment Classification In this task, we predict the sentiment of sentences sampled from movie reviews. We use the Stanford Sentiment Treebank (Socher et al., 2013). There are two subtasks: binary classification of sentences, and fine-grained classification over five classes: very negative, negative, neutral, positive, and very positive. We use the standard train/dev/test splits of 6920/872/1821 for the binary classification subtask and 8544/1101/2210 for the fine-grained classification subtask (there are fewer examples for the binary subtask since [Footnote 1] In the subsequent experiments, we found that optimizing this objective yielded better performance than a mean squared error objective. [Footnote 2] For our Bidirectional LSTMs, the parameters of the forward and backward transition functions are shared. In our experiments, this achieved superior performance to Bidirectional LSTMs with untied weights and the same number of parameters (and therefore smaller hidden vector dimensionality).
1503.00075#19
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
20
Table 1: Memory dimensions d and composition function parameter counts |θ| for each LSTM variant that we evaluate. Relatedness task: Standard d = 150, |θ| = 203,400; Bidirectional d = 150, |θ| = 203,400; 2-layer d = 108, |θ| = 203,472; Bidirectional 2-layer d = 108, |θ| = 203,472; Constituency Tree d = 142, |θ| = 205,190; Dependency Tree d = 150, |θ| = 203,400. Sentiment task: Standard d = 168, |θ| = 315,840; Bidirectional d = 168, |θ| = 315,840; 2-layer d = 120, |θ| = 318,720; Bidirectional 2-layer d = 120, |θ| = 318,720; Constituency Tree d = 150, |θ| = 316,800; Dependency Tree d = 168, |θ| = 315,840. neutral sentences are excluded). Standard binarized constituency parse trees are provided for each sentence in the dataset, and each node in these trees is annotated with a sentiment label. For the sequential LSTM baselines, we predict the sentiment of a phrase using the representation given by the final LSTM hidden state. The sequential LSTM models are trained on the spans corresponding to labeled nodes in the training set.
1503.00075#20
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
21
We use the classification model described in Sec. 4.1 with both Dependency Tree-LSTMs (Sec. 3.1) and Constituency Tree-LSTMs (Sec. 3.2). The Constituency Tree-LSTMs are structured according to the provided parse trees. For the Dependency Tree-LSTMs, we produce dependency parses3 of each sentence; each node in a tree is given a sentiment label if its span matches a labeled span in the training set. # 5.2 Semantic Relatedness For a given pair of sentences, the semantic relatedness task is to predict a human-generated rating of the similarity of the two sentences in meaning. We use the Sentences Involving Compositional Knowledge (SICK) dataset (Marelli et al., 2014), consisting of 9927 sentence pairs in a 4500/500/4927 train/dev/test split. The sentences are derived from existing image and video description datasets. Each sentence pair is annotated with a relatedness score y ∈ [1, 5], with 1 indicating that the two sentences are completely unrelated, and 5 indicating that the two sentences are very related. Each label is the average of 10 ratings assigned by different human annotators.
1503.00075#21
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
23
Method (Fine-grained / Binary accuracy): RAE (Socher et al., 2013) 43.2 / 82.4; MV-RNN (Socher et al., 2013) 44.4 / 82.9; RNTN (Socher et al., 2013) 45.7 / 85.4; DCNN (Blunsom et al., 2014) 48.5 / 86.8; Paragraph-Vec (Le and Mikolov, 2014) 48.7 / 87.8; CNN-non-static (Kim, 2014) 48.0 / 87.2; CNN-multichannel (Kim, 2014) 47.4 / 88.1; DRNN (Irsoy and Cardie, 2014) 49.8 / 86.6. LSTM 46.4 (1.1) / 84.9 (0.6); Bidirectional LSTM 49.1 (1.0) / 87.5 (0.5); 2-layer LSTM 46.0 (1.3) / 86.3 (0.6); 2-layer Bidirectional LSTM 48.5 (1.0) / 87.2 (1.0). Dependency Tree-LSTM 48.4 (0.4) / 85.7 (0.4). Constituency Tree-LSTM: – randomly initialized vectors 43.9; – Glove vectors, fixed; – Glove vectors, tuned
1503.00075#23
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
25
Table 2: Test set accuracies on the Stanford Sentiment Treebank. For our experiments, we report mean accuracies over 5 runs (standard deviations in parentheses). Fine-grained: 5-class sentiment classification. Binary: positive/negative sentiment classification. produce binarized constituency parses4 and dependency parses of the sentences in the dataset for our Constituency Tree-LSTM and Dependency Tree-LSTM models. # 5.3 Hyperparameters and Training Details The hyperparameters for our models were tuned on the development set for each task. We initialized our word representations using publicly available 300-dimensional Glove vectors5 (Pennington et al., 2014). For the sentiment classification task, word representations were updated during training with a learning rate of 0.1. For the semantic relatedness task, word representations were held fixed as we did not observe any significant improvement when the representations were tuned.
1503.00075#25
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
26
Our models were trained using AdaGrad (Duchi et al., 2011) with a learning rate of 0.05 and a minibatch size of 25. The model parameters were regularized with a per-minibatch L2 regularization strength of 10−4. The sentiment classifier was additionally regularized using dropout (Hinton et al., 2012) with a dropout rate of 0.5. We did not observe performance gains using dropout on the semantic relatedness task. [Footnote 4] Constituency parses produced by the Stanford PCFG Parser (Klein and Manning, 2003). [Footnote 5] Trained on 840 billion tokens of Common Crawl data, http://nlp.stanford.edu/projects/glove/.
1503.00075#26
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
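A minimal sketch of the training configuration described above (AdaGrad with learning rate 0.05, minibatch size 25, and per-minibatch L2 strength 1e-4). The parameter vector and gradient function here are stand-ins, not the paper's model or code.

```python
# AdaGrad loop with the hyperparameters reported in the chunk above.
import numpy as np

lr, l2, batch_size = 0.05, 1e-4, 25
rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=100)   # stand-in parameter vector
accum = np.zeros_like(theta)              # AdaGrad accumulator of squared gradients
eps = 1e-8

def minibatch_grad(theta):
    # Placeholder gradient of the task loss for one minibatch of 25 examples.
    return 2 * theta * rng.uniform(0.5, 1.5, size=theta.shape)

for step in range(100):
    g = minibatch_grad(theta) + l2 * theta     # add the L2 regularization term
    accum += g ** 2
    theta -= lr * g / (np.sqrt(accum) + eps)   # AdaGrad update
print(float(np.linalg.norm(theta)))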
1503.00075
27
Method (Pearson's r / Spearman's ρ / MSE): Illinois-LH (Lai and Hockenmaier, 2014) 0.7993 / 0.7538 / 0.3692; UNAL-NLP (Jimenez et al., 2014) 0.8070 / 0.7489 / 0.3550; Meaning Factory (Bjerva et al., 2014) 0.8268 / 0.7721 / 0.3224; ECNU (Zhao et al., 2014) 0.8414 / – / –. Mean vectors 0.7577 (0.0013) / 0.6738 (0.0027) / 0.4557 (0.0090); DT-RNN (Socher et al., 2014) 0.7923 (0.0070) / 0.7319 (0.0071) / 0.3822 (0.0137); SDT-RNN (Socher et al., 2014) 0.7900 (0.0042) / 0.7304 (0.0076) / 0.3848 (0.0074). Pearson's r for the sequential LSTMs: LSTM 0.8528 (0.0031); Bidirectional LSTM 0.8567 (0.0028); 2-layer LSTM 0.8515 (0.0066); 2-layer Bidirectional LSTM 0.8558 (0.0014)
1503.00075#27
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
29
Table 3: Test set results on the SICK semantic relatedness subtask. For our experiments, we report mean scores over 5 runs (standard deviations in parentheses). Results are grouped as follows: (1) SemEval 2014 submissions; (2) Our own baselines; (3) Sequential LSTMs; (4) Tree-structured LSTMs. # 6 Results # 6.1 Sentiment Classification tion metrics. The first two metrics are measures of correlation against human evaluations of semantic relatedness. Our results are summarized in Table 2. The Constituency Tree-LSTM outperforms existing systems on the fine-grained classification subtask and achieves accuracy comparable to the state-of-the-art on the binary subtask. In particular, we find that it outperforms the Dependency Tree-LSTM. This performance gap is at least partially attributable to the fact that the Dependency Tree-LSTM is trained on less data: about 150K labeled nodes vs. 319K for the Constituency Tree-LSTM. This difference is due to (1) the dependency representations containing fewer nodes than the corresponding constituency representations, and (2) the inability to match about 9% of the dependency nodes to a corresponding span in the training data.
1503.00075#29
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
30
We found that updating the word representations during training (“fine-tuning” the word embedding) yields a significant boost in performance on the fine-grained classification subtask and gives a minor gain on the binary classification subtask (this finding is consistent with previous work on this task by Kim (2014)). These gains are to be expected since the Glove vectors used to initialize our word representations were not originally trained to capture sentiment. We compare our models against a number of non-LSTM baselines. The mean vector baseline computes sentence representations as a mean of the representations of the constituent words. The DT-RNN and SDT-RNN models (Socher et al., 2014) both compose vector representations for the nodes in a dependency tree as a sum over affine-transformed child vectors, followed by a nonlinearity. The SDT-RNN is an extension of the DT-RNN that uses a separate transformation for each dependency relation. For each of our baselines, including the LSTM models, we use the similarity model described in Sec. 4.2.
1503.00075#30
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
31
We also compare against four of the top-performing systems6 submitted to the SemEval 2014 semantic relatedness shared task: ECNU (Zhao et al., 2014), The Meaning Factory (Bjerva et al., 2014), UNAL-NLP (Jimenez et al., 2014), and Illinois-LH (Lai and Hockenmaier, 2014). These systems are heavily feature engineered, generally using a combination of surface form overlap features and lexical distance features derived from WordNet or the Paraphrase Database (Ganitkevitch et al., 2013). Our LSTM models outperform all these sys- # 6.2 Semantic Relatedness Our results are summarized in Table 3. Following Marelli et al. (2014), we use Pearson's r, Spearman's ρ and mean squared error (MSE) as evalua- [Footnote 6] We list the strongest results we were able to find for this task; in some cases, these results are stronger than the official performance by the team on the shared task. For example, the listed result by Zhao et al. (2014) is stronger than their submitted system's Pearson correlation score of 0.8280.
1503.00075#31
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
32
Figure 3: Fine-grained sentiment classification accuracy vs. sentence length. For each ℓ, we plot accuracy for the test set sentences with length in the window [ℓ − 2, ℓ + 2]. Examples in the tail of the length distribution are batched in the final window (ℓ = 45). tems without any additional feature engineering, with the best results achieved by the Dependency Tree-LSTM. Recall that in this task, both Tree-LSTM models only receive supervision at the root of the tree, in contrast to the sentiment classification task where supervision was also provided at the intermediate nodes. We conjecture that in this setting, the Dependency Tree-LSTM benefits from its more compact structure relative to the Constituency Tree-LSTM, in the sense that paths from input word vectors to the root of the tree are shorter on aggregate for the Dependency Tree-LSTM. # 7 Discussion and Qualitative Analysis # 7.1 Modeling Semantic Relatedness
1503.00075#32
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
33
# 7 Discussion and Qualitative Analysis

# 7.1 Modeling Semantic Relatedness

In Table 4, we list nearest-neighbor sentences retrieved from a 1000-sentence sample of the SICK test set. We compare the neighbors ranked by the Dependency Tree-LSTM model against a baseline ranking by cosine similarity of the mean word vectors for each sentence. The Dependency Tree-LSTM model exhibits several desirable properties. Note that in the dependency parse of the second query sentence, the word “ocean” is the second-furthest word from the root (“waving”), with a depth of 4. Regardless, the retrieved sentences are all semantically related to the word “ocean”, which indicates that the Tree-LSTM is able to both preserve and emphasize information from relatively distant nodes. Additionally, the Tree-LSTM model shows greater roa-

[Figure 4 plot: Pearson correlation r (y-axis) vs. mean sentence length (x-axis), with curves for DT-LSTM, CT-LSTM, LSTM and Bi-LSTM.]
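The baseline mentioned above ranks candidate sentences by cosine similarity between mean word vectors. A hedged sketch of that baseline follows; the tokenization, embedding table, and vector dimensionality are assumptions for illustration, not details from the paper.

```python
# Sketch of the mean word-vector baseline used for the neighbor ranking above:
# a sentence is represented by the average of its word embeddings, and
# candidates are ranked by cosine similarity to the query.
# `embeddings` is an assumed dict mapping token -> numpy vector.
import numpy as np

def sentence_vector(tokens, embeddings, dim=300):
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def rank_by_cosine(query_tokens, candidate_token_lists, embeddings):
    q = sentence_vector(query_tokens, embeddings)
    scored = []
    for tokens in candidate_token_lists:
        c = sentence_vector(tokens, embeddings)
        denom = np.linalg.norm(q) * np.linalg.norm(c)
        cos = float(q @ c / denom) if denom else 0.0
        scored.append((cos, " ".join(tokens)))
    return sorted(scored, reverse=True)  # highest cosine first
```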
1503.00075#33
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
34
Figure 4: Pearson correlations r between predicted similarities and gold ratings vs. sentence length. For each ℓ, we plot r for the pairs with mean length in the window [ℓ-2, ℓ+2]. Examples in the tail of the length distribution are batched in the final window (ℓ = 18.5).

-bustness to differences in sentence length. Given the query "two men are playing guitar", the Tree-LSTM associates the phrase "playing guitar" with the longer, related phrase "dancing and singing in front of a crowd" (note as well that there is zero token overlap between the two phrases).

# 7.2 Effect of Sentence Length

One hypothesis to explain the empirical strength of Tree-LSTMs is that tree structures help mitigate the problem of preserving state over long sequences of words. If this were true, we would expect to see the greatest improvement over sequential LSTMs on longer sentences. In Figs. 3 and 4, we show the relationship between sentence length and performance as measured by the relevant task-specific metric. Each data point is a mean score over 5 runs, and error bars have been omitted for clarity.
1503.00075#34
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
35
We observe that while the Dependency Tree-LSTM does significantly outperform its sequential counterparts on the relatedness task for longer sentences of length 13 to 15 (Fig. 4), it also achieves consistently strong performance on shorter sentences. This suggests that unlike sequential LSTMs, Tree-LSTMs are able to encode semantically-useful structural information in the sentence representations that they compose.

# 8 Related Work

Distributed representations of words (Rumelhart et al., 1988; Collobert et al., 2011; Turian et al., 2010; Huang et al., 2012; Mikolov et al., 2013;
1503.00075#35
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
36
Ranking by mean word vector cosine similarity (score) vs. ranking by Dependency Tree-LSTM model (score):

Query: a woman is slicing potatoes
  Mean word vector cosine similarity: a woman is cutting potatoes (0.96); a woman is slicing herbs (0.92); a woman is slicing tofu (0.92)
  Dependency Tree-LSTM model: a woman is cutting potatoes (4.82); potatoes are being sliced by a woman (4.70); tofu is being sliced by a woman (4.39)

Query: a boy is waving at some young runners from the ocean
  Mean word vector cosine similarity: a man and a boy are standing at the bottom of some stairs, which are outdoors (0.92); a group of children in uniforms is standing at a gate and one is kissing the mother (0.90); a group of children in uniforms is standing at a gate and there is no one kissing the mother (0.90)
  Dependency Tree-LSTM model: a group of men is playing with a ball on the beach (3.79); a young boy wearing a red swimsuit is jumping out of a blue kiddies pool (3.37); the man is tossing a kid into the swimming pool that is near the ocean (3.19)

Query: two men are playing guitar
  Mean word vector cosine similarity: some men are playing rugby (0.88); two men are talking (0.87); two dogs are playing with each other (0.87)
  Dependency Tree-LSTM model: the man is singing and playing the guitar (4.08); the man is opening the guitar for donations and plays with the case (4.01); two men are dancing and singing in front of a crowd (4.00)
1503.00075#36
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
37
Table 4: Most similar sentences from a 1000-sentence sample drawn from the SICK test set. The Tree-LSTM model is able to pick up on more subtle relationships, such as that between “beach” and “ocean” in the second example.

Pennington et al., 2014) have found wide applicability in a variety of NLP tasks. Following this success, there has been substantial interest in the area of learning distributed phrase and sentence representations (Mitchell and Lapata, 2010; Yessenalina and Cardie, 2011; Grefenstette et al., 2013; Mikolov et al., 2013), as well as distributed representations of longer bodies of text such as paragraphs and documents (Srivastava et al., 2013; Le and Mikolov, 2014).

and sentiment classification, outperforming existing systems on both. Controlling for model dimensionality, we demonstrated that Tree-LSTM models are able to outperform their sequential counterparts. Our results suggest further lines of work in characterizing the role of structure in producing distributed representations of sentences.

# Acknowledgements
1503.00075#37
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
38
# Acknowledgements

Our approach builds on recursive neural networks (Goller and Kuchler, 1996; Socher et al., 2011), which we abbreviate as Tree-RNNs in order to avoid confusion with recurrent neural networks. Under the Tree-RNN framework, the vector representation associated with each node of a tree is composed as a function of the vectors corresponding to the children of the node. The choice of composition function gives rise to numerous variants of this basic framework. Tree-RNNs have been used to parse images of natural scenes (Socher et al., 2011), compose phrase representations from word vectors (Socher et al., 2012), and classify the sentiment polarity of sentences (Socher et al., 2013).
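The Tree-RNN framework described here composes each node's vector from its children's vectors. The sketch below uses a simple affine-plus-tanh composition over a binary tree purely for illustration; it is one possible composition function under assumed parameters W and b, not the paper's Tree-LSTM update.

```python
# Sketch of the basic Tree-RNN composition described above: the vector at
# each node is a function of its children's vectors. Illustrative only;
# this is NOT the Tree-LSTM cell itself.
import numpy as np

class Node:
    def __init__(self, word_vec=None, left=None, right=None):
        self.word_vec, self.left, self.right = word_vec, left, right

def compose(node, W, b):
    """Recursively compute the representation of `node`.
    W: (d, 2d) composition matrix, b: (d,) bias -- assumed parameters."""
    if node.word_vec is not None:            # leaf: use the word embedding
        return node.word_vec
    h_left = compose(node.left, W, b)        # child representations
    h_right = compose(node.right, W, b)
    return np.tanh(W @ np.concatenate([h_left, h_right]) + b)
```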
1503.00075#38
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
39
We thank our anonymous reviewers for their valuable feedback. Stanford University gratefully acknowledges the support of a Natural Language Understanding-focused gift from Google Inc. and the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA8750-13-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government.

# 9 Conclusion

In this paper, we introduced a generalization of LSTMs to tree-structured network topologies. The Tree-LSTM architecture can be applied to trees with arbitrary branching factor. We demonstrated the effectiveness of the Tree-LSTM by applying the architecture in two tasks: semantic relatedness

# References

Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
1503.00075#39
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
40
Bengio, Yoshua, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2):157–166.

Bjerva, Johannes, Johan Bos, Rob van der Goot, and Malvina Nissim. 2014. The Meaning Factory: Formal semantics for recognizing textual entailment and determining semantic similarity. SemEval 2014.

Blunsom, Phil, Edward Grefenstette, Nal Kalchbrenner, et al. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.

Chen, Danqi and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). pages 740–750.

Collobert, Ronan, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research 12:2493–2537.
1503.00075#40
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
41
Duchi, John, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research 12:2121–2159.

Elman, Jeffrey L. 1990. Finding structure in time. Cognitive science 14(2):179–211.

Foltz, Peter W, Walter Kintsch, and Thomas K Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse processes 25(2-3):285–307.

Ganitkevitch, Juri, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Paraphrase Database. In HLT-NAACL. pages 758–764.

Goller, Christoph and Andreas Kuchler. 1996. Learning task-dependent distributed representations by backpropagation through structure. In IEEE International Conference on Neural Networks. volume 1, pages 347–352.

Graves, Alex, Navdeep Jaitly, and A-R Mohamed. 2013. Hybrid speech recognition with deep bidirectional LSTM. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). pages 273–278.
1503.00075#41
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
42
Grefenstette, Edward, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. arXiv preprint arXiv:1301.6939.

Hinton, Geoffrey E, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.

Hochreiter, Sepp. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6(02):107–116.

Hochreiter, Sepp and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation 9(8):1735–1780.

Huang, Eric H., Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Annual Meeting of the Association for Computational Linguistics (ACL).
1503.00075#42
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
43
Irsoy, Ozan and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Advances in Neural Information Processing Systems. pages 2096–2104.

Jimenez, Sergio, George Duenas, Julia Baquero, Alexander Gelbukh, Av Juan Dios Bátiz, and Av Mendizábal. 2014. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. SemEval 2014.

Kim, Yoon. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.

Klein, Dan and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1. Association for Computational Linguistics, pages 423–430.

Lai, Alice and Julia Hockenmaier. 2014. Illinois-LH: A denotational and distributional approach to semantics. SemEval 2014.

Landauer, Thomas K and Susan T Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review 104(2):211.
1503.00075#43
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
44
Le, Quoc V and Tomas Mikolov. 2014. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053.

Marelli, Marco, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. 2014. SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In SemEval 2014.

Mikolov, Tomáš. 2012. Statistical Language Models Based on Neural Networks. Ph.D. thesis, Brno University of Technology.

Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. pages 3111–3119.

Mitchell, Jeff and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science 34(8):1388–1429.
1503.00075#44
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
45
Mitchell, Jeff and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science 34(8):1388–1429.

Pennington, Jeffrey, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. Proceedings of the Empirical Methods in Natural Language Processing (EMNLP 2014) 12.

Rumelhart, David E, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by back-propagating errors. Cognitive modeling 5.

Socher, Richard, Brody Huval, Christopher D Manning, and Andrew Y Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics, pages 1201–1211.

Socher, Richard, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics 2:207–218.
1503.00075#45
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
46
Socher, Richard, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11). pages 129–136.

Socher, Richard, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Srivastava, Nitish, Ruslan R Salakhutdinov, and Geoffrey E Hinton. 2013. Modeling documents with deep boltzmann machines. arXiv preprint arXiv:1309.6865.

Sutskever, Ilya, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems. pages 3104–3112.
1503.00075#46
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1503.00075
47
Turian, Joseph, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for computational linguistics. Association for Computational Linguistics, pages 384–394.

Vinyals, Oriol, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2014. Show and tell: A neural image caption generator. arXiv preprint arXiv:1411.4555.

Yessenalina, Ainur and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 172–182.

Zaremba, Wojciech and Ilya Sutskever. 2014. Learning to execute. arXiv preprint arXiv:1410.4615.

Zhao, Jiang, Tian Tian Zhu, and Man Lan. 2014. ECNU: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment. SemEval 2014.
1503.00075#47
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).
http://arxiv.org/pdf/1503.00075
Kai Sheng Tai, Richard Socher, Christopher D. Manning
cs.CL, cs.AI, cs.LG
Accepted for publication at ACL 2015
null
cs.CL
20150228
20150530
[]
1502.06512
0
# From Seed AI to Technological Singularity via Recursively Self-Improving Software

Roman V. Yampolskiy
Computer Engineering and Computer Science
Speed School of Engineering
University of Louisville
[email protected]

Abstract

Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, review the relevant literature, analyze limits on computation restricting recursive self-improvement and introduce RSI Convergence Theory which aims to predict general behavior of RSI systems. Finally, we address security implications from self-improving intelligent software.

Keywords: Recursive self-improvement, self-modifying code, self-modifying software, self-modifying algorithm; Autogenous intelligence, Bootstrap fallacy;
1502.06512#0
From Seed AI to Technological Singularity via Recursively Self-Improving Software
Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, review the relevant literature, analyze limits on computation restricting recursive self-improvement and introduce RSI Convergence Theory which aims to predict general behavior of RSI systems. Finally, we address security implications from self-improving intelligent software.
http://arxiv.org/pdf/1502.06512
Roman V. Yampolskiy
cs.AI
null
null
cs.AI
20150223
20150223
[]
1502.06512
1
Keywords: Recursive self-improvement, self-modifying code, self-modifying software, self-modifying algorithm; Autogenous intelligence, Bootstrap fallacy;

1. Introduction

Since the early days of computer science, visionaries in the field anticipated creation of a self-improving intelligent system, frequently as an easier pathway to creation of true artificial intelligence. As early as 1950 Alan Turing wrote: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child-brain is something like a notebook as one buys from the stationers. Rather little mechanism, and lots of blank sheets... Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child” [1].
1502.06512#1
From Seed AI to Technological Singularity via Recursively Self-Improving Software
Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, review the relevant literature, analyze limits on computation restricting recursive self-improvement and introduce RSI Convergence Theory which aims to predict general behavior of RSI systems. Finally, we address security implications from self-improving intelligent software.
http://arxiv.org/pdf/1502.06512
Roman V. Yampolskiy
cs.AI
null
null
cs.AI
20150223
20150223
[]
1502.06512
2
Turing’s approach to creation of artificial (super)intelligence was echoed by I.J. Good, Marvin Minsky and John von Neumann, all three of whom published on it (interestingly in the same year, 1966):

Good - “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make” [2].

Minsky - “Once we have devised programs with a genuine capacity for self-improvement a rapid evolutionary process will begin. As the machine improves both itself and its model of itself, we shall begin to see all the phenomena associated with the terms
1502.06512#2
From Seed AI to Technological Singularity via Recursively Self-Improving Software
Software capable of improving itself has been a dream of computer scientists since the inception of the field. In this work we provide definitions for Recursively Self-Improving software, survey different types of self-improving software, review the relevant literature, analyze limits on computation restricting recursive self-improvement and introduce RSI Convergence Theory which aims to predict general behavior of RSI systems. Finally, we address security implications from self-improving intelligent software.
http://arxiv.org/pdf/1502.06512
Roman V. Yampolskiy
cs.AI
null
null
cs.AI
20150223
20150223
[]