# Visualizing and Understanding Neural Models in NLP

Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky

arXiv:1506.01066 [cs.CL]. Published 2015-06-02, updated 2016-01-08. Source: http://arxiv.org/pdf/1506.01066

Abstract: While neural networks have been successfully applied to many NLP tasks, the resulting vector-based models are very difficult to interpret. For example, it is not clear how they achieve compositionality: building sentence meaning from the meanings of words and phrases. In this paper we describe four strategies for visualizing compositionality in neural models for NLP, inspired by similar work in computer vision. We first plot unit values to visualize compositionality of negation, intensification, and concessive clauses, allowing us to see well-known markedness asymmetries in negation. We then introduce three simple and straightforward methods for visualizing a unit's salience, the amount it contributes to the final composed meaning: (1) gradient back-propagation, (2) the variance of a token from the average word node, (3) LSTM-style gates that measure information flow. We test our methods on sentiment using simple recurrent nets and LSTMs. Our general-purpose methods may have wide applications for understanding compositionality and other semantic properties of deep networks, and also shed light on why LSTMs outperform simple recurrent nets.

Figure 2: t-SNE visualization of latent representations for modifications and negations.
Figure 4: t-SNE visualization for clause composition.
Concessive Sentences In concessive sentences, two clauses have opposite polarities, usually related by a contrary-to-expectation implicature. We plot evolving representations over time for two concessives in Figure 3. The plots suggest:
1. For tasks like sentiment analysis whose goal is to predict a specific semantic dimension (as opposed to general tasks like language-model word prediction), too large a dimensionality leaves many dimensions non-functional (with values close to 0), causing two sentences of opposite sentiment to differ only in a few dimensions. This may explain why more dimensions don't necessarily lead to better performance on such tasks (for example, as reported in (Socher et al., 2013), optimal performance is achieved when word dimensionality is set to between 25 and 35).
2. Both sentences contain two clauses connected by the conjunction "though". Such two-clause sentences might either work collaboratively (models would remember the word "though" and make the second clause share the same sentiment orientation as the first) or competitively, with the stronger clause dominating. The region within the dotted line in Figure 3(a) favors the second assumption: the difference between the two sentences is diluted when the final words ("interesting" and "boring") appear.
Clause Composition In Figure 4 we explore this clause composition in more detail. Representations move closer to the negative-sentiment region when negative clauses like "although it had bad acting" or "but it is too long" are added to the end of a simply positive "I like the movie". By contrast, adding a concessive clause to a negative clause does not move toward the positive; "I hate X but ..." is still very negative, not that different from "I hate X". This difference again suggests the model is able to capture negative asymmetry (Clark and Clark, 1977; Horn, 1989; Fraenkel and Schul, 2008).
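Plots like Figures 2 and 4 can be produced along the following lines. This is a hedged sketch under assumed inputs (random vectors stand in for the model's composed phrase representations), not the authors' plotting script: compose a vector for each phrase with the trained model, then project to 2-D with t-SNE (Van der Maaten and Hinton, 2008).

```python
# Assumed workflow for a Figure 2/4-style plot; vectors are illustrative stand-ins.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
phrases = [
    "i like the movie",
    "i like the movie but it is too long",
    "i hate the movie",
    "i hate the movie although it had good acting",
]
vectors = rng.normal(size=(len(phrases), 60))   # stand-ins for composed phrase representations

coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(vectors)
for phrase, (x, y) in zip(phrases, coords):
    print(f"{x:7.2f} {y:7.2f}  {phrase}")
```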
Figure 5: Saliency heatmap for "I hate the movie ." Each row corresponds to saliency scores for the corresponding word representation, with each grid cell representing one dimension.
Figure 6: Saliency heatmap for "I hate the movie I saw last night ."
Figure 7: Saliency heatmap for "I hate the movie though the plot is interesting ."
# 5 First-Derivative Saliency
In this section, we describe another strategy, inspired by the back-propagation strategy in vision (Erhan et al., 2009; Simonyan et al., 2013). It measures how much each input unit contributes to the final decision, which can be approximated by first derivatives.
Figure 1: Visualizing intensification and negation. Each vertical bar shows the value of one dimension in the final sentence/phrase representation after composition. Embeddings for phrases or sentences are attained by composing word representations from the pretrained model.

More formally, for a classification model, an input E is associated with a gold-standard class label c. (Depending on the NLP task, an input could be the embedding for a word or a sequence of words, while labels could be POS tags, sentiment labels, the next word index to predict, etc.) Given embeddings E for input words with the associated gold class label c, the trained model associates the pair (E, c) with a score $S_c(E)$. The goal is to decide which units of E make the most significant contribution to $S_c(E)$, and thus to the decision, the choice of class label c.

In the case of deep neural models, the class score $S_c(e)$ is a highly non-linear function. We approximate $S_c(e)$ with a linear function of $e$ by computing
the first-order Taylor expansion:
$$S_c(e) \approx w(e)^{T} e + b \qquad (1)$$

where $w(e)$ is the derivative of $S_c$ with respect to the embedding $e$:

$$w(e) = \left.\frac{\partial S_c}{\partial e}\right|_{e} \qquad (2)$$

The magnitude (absolute value) of the derivative indicates the sensitivity of the final decision to a change in one particular dimension, telling us how much one specific dimension of the word embedding contributes to the final decision. The saliency score is given by

$$S(e) = |w(e)| \qquad (3)$$
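The following is a minimal sketch of Equations (1)-(3), not the authors' implementation: a toy classifier (a linear layer over averaged embeddings, standing in for the paper's trained recurrent/LSTM models) is differentiated with respect to the input word embeddings, and the absolute gradient gives the per-dimension saliency.

```python
# Minimal first-derivative saliency sketch (Eqs. 1-3); all weights are illustrative stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab_size, dim, n_classes = 1000, 60, 2
embed = nn.Embedding(vocab_size, dim)
classifier = nn.Linear(dim, n_classes)            # stand-in for a trained recurrent/LSTM encoder

tokens = torch.tensor([11, 42, 7, 99])            # word ids for, e.g., "i hate the movie"
gold_class = 0                                    # assumed label: negative sentiment

E = embed(tokens).detach().requires_grad_(True)   # input embeddings e
score = classifier(E.mean(dim=0))[gold_class]     # S_c(e): score of the gold label
score.backward()                                  # w(e) = dS_c/de  (Eq. 2)

saliency = E.grad.abs()                           # S(e) = |w(e)|   (Eq. 3)
print(saliency.sum(dim=1))                        # one aggregate score per word (a heatmap row)
```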
# 5.1 Results on Stanford Sentiment Treebank
We first illustrate results on the Stanford Sentiment Treebank. We plot in Figures 5, 6, and 7 the saliency scores (the absolute value of the derivative of the loss function with respect to each dimension of all word inputs) for three sentences, applying the trained model to each sentence. Each row corresponds to the saliency scores for the corresponding word representation, with each grid cell representing one dimension. The examples are based on the clear sentiment indicator "hate" that lends them all negative sentiment.

Figure 3: Representations over time from LSTMs. Each column corresponds to outputs from the LSTM at each time step (representations obtained after combining the current word embedding with previously built embeddings). Each grid cell in a column corresponds to one dimension of the current time-step representation. The last rows correspond to the absolute differences at each time step between the two sequences.
"I hate the movie" All three models assign high saliency to "hate" and dampen the influence of other tokens. The LSTM offers a clearer focus on "hate" than the standard recurrent model, but the bi-directional LSTM shows the clearest focus, attaching almost zero emphasis to words other than "hate". This is presumably due to the gate structures in LSTMs and Bi-LSTMs that control information flow, making these architectures better at filtering out less relevant information.
Figure 8: Variance visualization.
"I hate the movie that I saw last night" All three models assign the correct sentiment. The simple recurrent models again do poorly at filtering out irrelevant information, assigning too much salience to words unrelated to sentiment. However, none of the models suffer from gradient vanishing problems despite this sentence being longer; the salience of "hate" still stands out after 7-8 following convolutional operations.
"I hate the movie though the plot is interesting" The simple recurrent model emphasizes only the second clause "the plot is interesting", assigning no credit to the first clause "I hate the movie". This might seem to be caused by a vanishing gradient, yet the model correctly classifies the sentence as very negative, suggesting that it is successfully incorporating information from the first negative clause. We separately tested the individual clause "though the plot is interesting". The standard recurrent model confidently labels it as positive. Thus despite the lower saliency scores for words in the first clause, the simple recurrent system manages to rely on that clause and downplay the information from the latter positive clause, despite the higher saliency scores of the later words. This illustrates a limitation of saliency visualization: first-order derivatives don't capture all the information we would like to visualize, perhaps because they are

# 5.2 Results on Sequence-to-Sequence Autoencoder
Figure 9 presents the saliency heatmap for the autoencoder in terms of predicting the corresponding token at each time step. We compute first derivatives for each preceding word through back-propagation as decoding goes on. Each grid cell corresponds to the magnitude of the average saliency value for each 1000-dimensional word vector. The heatmaps give a clear overview of the behavior of neural models during decoding. Observations can be summarized as follows:

1. For each time step of word prediction, SEQ2SEQ models manage to link the word to predict back to the corresponding region of the inputs (automatically learning alignments); e.g., the input region centering around the token "hate" exerts more impact when the token "hate" is to be predicted, and similarly for the tokens "movie", "plot" and "boring".

2. Neural decoding combines the previously built representation with the word predicted at the current step. As decoding proceeds, the influence
of the initial input on decoding (i.e., tokens in the source sentence) gradually diminishes as more previously predicted words are encoded in the vector representations. Meanwhile, the influence of the language model gradually dominates: when the word "boring" is to be predicted, the models attach more weight to the earlier predicted tokens "plot" and "is" but less to the corresponding region of the inputs, i.e., the word "boring" in the input.

Figure 9: Saliency heatmap for the SEQ2SEQ autoencoder in terms of predicting the corresponding token at each time step.
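A Figure-9-style heatmap can be computed along these lines. This is a hedged sketch assuming a toy LSTM encoder-decoder with random weights, not the trained SEQ2SEQ autoencoder of the paper: at each decoding step, the score of the gold token is back-propagated into the source word embeddings, and the gradient magnitudes form one row of the heatmap.

```python
# Illustrative decoder-step saliency for a toy seq2seq autoencoder (assumed setup).
import torch
import torch.nn as nn

torch.manual_seed(0)
vocab, dim = 20, 32
embed = nn.Embedding(vocab, dim)
encoder = nn.LSTM(dim, dim, batch_first=True)
decoder = nn.LSTM(dim, dim, batch_first=True)
out_proj = nn.Linear(dim, vocab)

src = torch.tensor([[3, 7, 2, 9, 5]])                 # source token ids (batch of 1)
tgt = src.clone()                                     # autoencoder: target = source

src_emb = embed(src).detach().requires_grad_(True)    # leaf tensor we take gradients w.r.t.
_, state = encoder(src_emb)                           # encode the source sentence

rows = []
dec_in = embed(tgt)                                   # teacher-forced decoder inputs
for t in range(tgt.size(1)):
    out, state = decoder(dec_in[:, t:t + 1, :], state)
    score = out_proj(out)[0, 0, tgt[0, t]]            # score of the token predicted at step t
    grad, = torch.autograd.grad(score, src_emb, retain_graph=True)
    rows.append(grad.abs().mean(dim=-1).squeeze(0))   # avg |d score / d e_i| per source word

heatmap = torch.stack(rows)                           # (decoding steps) x (source words)
print(heatmap)
```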
# 6 Average and Variance
For settings where word embeddings are treated as parameters to optimize from scratch (as opposed to using pre-trained embeddings), we propose a second, surprisingly easy and direct way to visualize important indicators. We first compute the average of the word embeddings for all the words within the sentence. The measure of salience or influence for a word is its deviation from this average. The idea is that during training, models would learn to render indicators different from non-indicator words, enabling them to stand out even after many layers of computation.
Figure 8 shows a map of variance; each grid cell corresponds to the value of $\|e_{i,j} - \frac{1}{N_S}\sum_{w \in S} e_{w,j}\|^2$, where $e_{i,j}$ denotes the value of the $j$-th dimension of word $i$ and $N_S$ denotes the number of tokens within the sentence.
As the figure shows, the variance-based salience measure also does a good job of emphasizing the relevant sentiment words. The method does have shortcomings: (1) it can only be used in scenarios where word embeddings are parameters to learn; (2) it is not clear how well the model is able to visualize local compositionality.
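A small sketch of the variance measure follows, with assumed toy embeddings rather than the trained model's parameters: each word's score is its squared deviation, per dimension, from the sentence's average embedding.

```python
# Variance-based salience sketch (Section 6); embeddings here are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
sentence = ["i", "hate", "the", "movie"]
E = rng.normal(size=(len(sentence), 60))     # learned word embeddings e_i (toy values)

mean_vec = E.mean(axis=0)                    # (1/N_S) * sum over words of e_w
variance_map = (E - mean_vec) ** 2           # squared deviation per word, per dimension
word_scores = variance_map.sum(axis=1)       # aggregate over dimensions for each word

for word, score in zip(sentence, word_scores):
    print(f"{word:>6s}  {score:.2f}")
```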
# 7 Conclusion
In this paper, we offer several methods to help visualize and interpret neural models, to understand how neural models are able to compose meanings, demonstrating asymmetries of negation and explaining some aspects of the strong performance of LSTMs at these tasks.
Though our attempts only touch superficial points in neural models, and each method has its pros and cons, together they may offer some insights into the behaviors of neural models in language-based tasks, marking one initial step toward understanding how they achieve meaning composition in natural language processing. Our future work includes using the results of the visualization to perform error analysis, and understanding the strengths and limitations of different neural models.
# References
Herbert H. Clark and Eve V. Clark. 1977. Psychology and language: An introduction to psycholinguistics. Harcourt Brace Jovanovich.
Navneet Dalal and Bill Triggs. 2005. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition, 2005 (CVPR 2005), IEEE Computer Society Conference on, volume 1, pages 886–893. IEEE.
Jeffrey L. Elman. 1989. Representation and structure in connectionist models. Technical Report 8903, Center for Research in Language, University of California, San Diego.
Dumitru Erhan, Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2009. Visualizing higher-layer features of a deep network. Dept. IRO, Université de Montréal, Tech. Rep.
Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL.
Manaal Faruqui, Yulia Tsvetkov, Dani Yogatama, Chris Dyer, and Noah Smith. 2015. Sparse overcomplete word vector representations. arXiv preprint arXiv:1506.02004.

Tamar Fraenkel and Yaacov Schul. 2008. The meaning of negated adjectives. Intercultural Pragmatics, 5(4):517–540.

Alona Fyshe, Leila Wehbe, Partha P. Talukdar, Brian Murphy, and Tom M. Mitchell. 2015. A compositional and interpretable semantic space. In Proceedings of NAACL-HLT, Denver, USA.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 580–587. IEEE.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.

Laurence R. Horn. 1989. A natural history of negation, volume 960. University of Chicago Press, Chicago.

Yangfeng Ji and Jacob Eisenstein. 2014. Representation learning for text-level discourse parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 13–24.

Andrej Karpathy, Justin Johnson, and Fei-Fei Li. 2015. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078.

Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055.

David G. Lowe. 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110.
Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2014. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206.
Aravindh Mahendran and Andrea Vedaldi. 2014. Understanding deep image representations by inverting them. arXiv preprint arXiv:1412.0035.

Brian Murphy, Partha Pratim Talukdar, and Tom M. Mitchell. 2012. Learning effective and interpretable semantic models using non-negative sparse embedding. In COLING, pages 1933–1950.

Anh Nguyen, Jason Yosinski, and Jeff Clune. 2014. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. arXiv preprint arXiv:1412.1897.
Mike Schuster and Kuldip K. Paliwal. 1997. Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on, 45(11):2673–2681.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2013. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034.

Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1631–1642.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.

Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.

Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.

Carl Vondrick, Aditya Khosla, Tomasz Malisiewicz, and Antonio Torralba. 2013. HOGgles: Visualizing object detection features. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 1–8. IEEE.

Philippe Weinzaepfel, Hervé Jégou, and Patrick Pérez. 2011. Reconstructing an image from its local descriptors. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 337–344. IEEE.

Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Computer Vision – ECCV 2014, pages 818–833. Springer.

# Appendix
Recurrent Models A recurrent network successively takes word $w_t$ at step $t$, combines its vector representation $e_t$ with the previously built hidden vector $h_{t-1}$ from time $t-1$, calculates the resulting current embedding $h_t$, and passes it to the next step. The embedding $h_t$ for the current time $t$ is thus:
$$h_t = f(W \cdot h_{t-1} + V \cdot e_t) \qquad (4)$$
where $W$ and $V$ denote compositional matrices. If $N_S$ denotes the length of the sequence, $h_{N_S}$ represents the whole sequence $S$. $h_{N_S}$ is used as input to a softmax function for classification tasks.
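A minimal sketch of Equation (4), with assumed toy weights and embeddings (illustrative only): the recurrent composition folds each word embedding into the running hidden state, and the final state summarizes the sentence for the softmax classifier.

```python
# Plain recurrent composition, Eq. (4); weights and embeddings are toy values.
import numpy as np

rng = np.random.default_rng(0)
dim = 60
W = rng.normal(scale=0.1, size=(dim, dim))   # recurrent (composition) matrix
V = rng.normal(scale=0.1, size=(dim, dim))   # input matrix

def encode(word_embeddings):
    h = np.zeros(dim)
    for e_t in word_embeddings:              # h_t = f(W . h_{t-1} + V . e_t), f = tanh
        h = np.tanh(W @ h + V @ e_t)
    return h                                 # h_{N_S}: representation of the whole sequence

sentence = rng.normal(size=(5, dim))         # five toy word embeddings
print(encode(sentence)[:5])
```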
Multi-layer Recurrent Models Multi-layer recurrent models extend the one-layer recurrent structure to a deep neural architecture that enables more expressivity and flexibility. The model associates each time step at each layer with a hidden representation $h_{l,t}$, where $l \in [1, L]$ denotes the index of the layer and $t$ denotes the index of the time step. $h_{l,t}$ is given by:
$$h_{t,l} = f(W \cdot h_{t-1,l} + V \cdot h_{t,l-1}) \qquad (5)$$
where $h_{t,0} = e_t$, the original word embedding input at the current time step.
Long Short-Term Memory The LSTM model, first proposed in (Hochreiter and Schmidhuber, 1997), maps an input sequence to a fixed-sized vector by sequentially convoluting the current representation with the output representation of the previous step. An LSTM associates each time step with input, control, and memory gates, and tries to minimize the impact of unrelated information. $i_t$, $f_t$ and $o_t$ denote the gate states at time $t$; $h_t$ denotes the hidden vector output by the LSTM model at time $t$, and $e_t$ denotes the word embedding input at time $t$. We have:
i_t = σ(W_i · e_t + V_i · h_{t-1}),  f_t = σ(W_f · e_t + V_f · h_{t-1}),  o_t = σ(W_o · e_t + V_o · h_{t-1}),  l_t = tanh(W_l · e_t + V_l · h_{t-1}),  c_t = f_t · c_{t-1} + i_t × l_t,  h_t = o_t · c_t   (6)
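Below is a minimal NumPy sketch of one step of equation (6), with the symbols explained in the sentence that follows; treating the gates as scalars and writing the memory cell as c_t are readings of the formulation above, and the weight shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(e_t, h_prev, c_prev, params):
    """One step of equation (6). The gates i_t, f_t, o_t are scalars in [0, 1],
    so w_i, v_i, w_f, v_f, w_o, v_o are weight vectors; W_l, V_l are matrices."""
    w_i, v_i, w_f, v_f, w_o, v_o, W_l, V_l = params
    i_t = sigmoid(w_i @ e_t + v_i @ h_prev)   # input gate (scalar)
    f_t = sigmoid(w_f @ e_t + v_f @ h_prev)   # forget/control gate (scalar)
    o_t = sigmoid(w_o @ e_t + v_o @ h_prev)   # output gate (scalar)
    l_t = np.tanh(W_l @ e_t + V_l @ h_prev)   # candidate update (vector)
    c_t = f_t * c_prev + i_t * l_t            # memory cell
    h_t = o_t * c_t                           # hidden output
    return h_t, c_t
```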
where σ denotes the sigmoid function, i_t, f_t and o_t are scalars within the range [0, 1], and × denotes the pairwise (element-wise) product. | 1506.01066#38 | Visualizing and Understanding Neural Models in NLP | While neural networks have been successfully applied to many NLP tasks the
resulting vector-based models are very difficult to interpret. For example it's
not clear how they achieve {\em compositionality}, building sentence meaning
from the meanings of words and phrases. In this paper we describe four
strategies for visualizing compositionality in neural models for NLP, inspired
by similar work in computer vision. We first plot unit values to visualize
compositionality of negation, intensification, and concessive clauses, allow us
to see well-known markedness asymmetries in negation. We then introduce three
simple and straightforward methods for visualizing a unit's {\em salience}, the
amount it contributes to the final composed meaning: (1) gradient
back-propagation, (2) the variance of a token from the average word node, (3)
LSTM-style gates that measure information flow. We test our methods on
sentiment using simple recurrent nets and LSTMs. Our general-purpose methods
may have wide applications for understanding compositionality and other
semantic properties of deep networks , and also shed light on why LSTMs
outperform simple recurrent nets, | http://arxiv.org/pdf/1506.01066 | Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky | cs.CL | null | null | cs.CL | 20150602 | 20160108 | [
{
"id": "1510.03055"
},
{
"id": "1506.02078"
},
{
"id": "1506.02004"
},
{
"id": "1506.05869"
}
] |
1506.01066 | 39 | where σ denotes the sigmoid function, i_t, f_t and o_t are scalars within the range [0, 1], and × denotes the pairwise (element-wise) product.
A multi-layer LSTM model works in the same way as multi-layer recurrent models, by enabling multi-layer compositions.
Bidirectional Models (Schuster and Paliwal, 1997) add bidirectionality to the recurrent framework, where embeddings for each time step are calculated both forwardly and backwardly:
h→_t = f(W→ · h→_{t-1} + V→ · e_t),  h←_t = f(W← · h←_{t+1} + V← · e_t)   (7)
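A minimal NumPy sketch of the two recurrences in equation (7) is given below; the final forward and backward states are concatenated for the classifier, as described in the next sentence, and tanh for f is an illustrative assumption.

```python
import numpy as np

def birnn(embeddings, Wf, Vf, Wb, Vb, f=np.tanh):
    """Run the forward and backward recurrences of equation (7) over a sequence
    and return the concatenation of the two final states for the classifier."""
    d = Wf.shape[0]
    h_fwd = np.zeros(d)
    for e_t in embeddings:            # left to right
        h_fwd = f(Wf @ h_fwd + Vf @ e_t)
    h_bwd = np.zeros(d)
    for e_t in embeddings[::-1]:      # right to left
        h_bwd = f(Wb @ h_bwd + Vb @ e_t)
    return np.concatenate([h_fwd, h_bwd])
```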
Normally, bidirectional models feed the concatenation of the vectors calculated from the two directions to the classifier. Bidirectional models can be similarly extended to both the multi-layer neural model and the LSTM version. | 1506.01066#39 | Visualizing and Understanding Neural Models in NLP | While neural networks have been successfully applied to many NLP tasks the
resulting vector-based models are very difficult to interpret. For example it's
not clear how they achieve {\em compositionality}, building sentence meaning
from the meanings of words and phrases. In this paper we describe four
strategies for visualizing compositionality in neural models for NLP, inspired
by similar work in computer vision. We first plot unit values to visualize
compositionality of negation, intensification, and concessive clauses, allow us
to see well-known markedness asymmetries in negation. We then introduce three
simple and straightforward methods for visualizing a unit's {\em salience}, the
amount it contributes to the final composed meaning: (1) gradient
back-propagation, (2) the variance of a token from the average word node, (3)
LSTM-style gates that measure information flow. We test our methods on
sentiment using simple recurrent nets and LSTMs. Our general-purpose methods
may have wide applications for understanding compositionality and other
semantic properties of deep networks , and also shed light on why LSTMs
outperform simple recurrent nets, | http://arxiv.org/pdf/1506.01066 | Jiwei Li, Xinlei Chen, Eduard Hovy, Dan Jurafsky | cs.CL | null | null | cs.CL | 20150602 | 20160108 | [
{
"id": "1510.03055"
},
{
"id": "1506.02078"
},
{
"id": "1506.02004"
},
{
"id": "1506.05869"
}
] |
1506.02488 | 0 | arXiv:1506.02488v1 [math.CA] 24 May 2015
# On the Fuzzy Stability of an Affine Functional Equation
# Md. Nasiruzzaman
Department of Mathematics, Aligarh Muslim University, Aligarh 202002, India Email: [email protected]
Abstract: In this paper, we obtain the general solution of the following functional equation
f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) = 6f (x + y + z).
We establish the Hyers-Ulam-Rassias stability of the above functional equation in the fuzzy normed spaces. Further we show the above functional equation is stable in the sense of Hyers and Ulam in fuzzy normed spaces. 1. Introduction | 1506.02488#0 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 1 | In modelling applied problems, only partial information may be known, or there may be a degree of uncertainty in the parameters used in the model, or some measurements may be imprecise. Due to such features, we are tempted to consider the study of functional equations in the fuzzy setting. For the last 40 years, fuzzy theory has become a very active area of research, and a lot of development has been made in the theory of fuzzy sets [1] to find the fuzzy analogues of classical set theory. This branch finds a wide range of applications in the fields of science and engineering. A.K. Katsaras [2] introduced an idea of a fuzzy norm on a linear space in 1984; in the same year Congxin Wu and Jinxuan Fang [3] introduced a notion of fuzzy normed space to give a generalization of the Kolmogoroff normalized theorem for fuzzy topological linear spaces. In 1991, R. Biswas [4] defined and studied fuzzy inner product spaces in a linear space. In 1992, C. Felbin [5] introduced an alternative definition of a | 1506.02488#1 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 2 | and studied fuzzy inner product spaces in a linear space. In 1992, C. Felbin [5] introduced an alternative definition of a fuzzy norm on the linear topological structure of a fuzzy normed linear space. In 2003, T. Bag and S.K. Samanta [6] modified the definition of S.C. Cheng and J.N. Mordeson [7] by removing a regularity condition. In 1940, Ulam [8] raised a question concerning the stability of group homomorphisms as follows: Let G1 be a group and G2 a metric group with the metric d(., .). Given ε > 0, does there exist a δ > 0 such that if a function f : G1 → G2 satisfies the inequality | 1506.02488#2 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 3 | d(f(xy), f(x)f(y)) < δ for all x, y ∈ G1,
then there exists a homomorphism h : G1 → G2 with
d(f(x), h(x)) < ε for all x ∈ G1?
The concept of stability for a functional equation arises when we replace the functional equation by an inequality which acts as a perturbation of the equation. In 1941, the case of approximately additive mappings was solved by Hyers [9] under the assumption that G2 is a Banach space. In 1978, a generalized version of the theorem of Hyers for approximately linear mappings was given by Th.M. Rassias [10]. He proved that for a mapping f : E1 → E2 such that f(tx) is continuous in t ∈ R for each fixed x ∈ E1, if there exist a constant ε > 0 and p ∈ [0, 1) with
‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p + ‖y‖^p)   (1.1)
for all x, y ∈ E1, then there exists a unique R-linear mapping T : E1 → E2 such that | 1506.02488#3 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 5 | The result of Rassias has influenced the development of what is now called the Hyers-Ulam-Rassias stability theory for functional equations. In 1994, a generalization of Rassias' theorem was obtained by Gavruta [11] by replacing the bound ε(‖x‖^p + ‖y‖^p) by a general control function φ(x, y). During the last decades, the stability problems of several functional equations have been extensively investigated by a number of authors (cf. [12], [13], [14], [17] and [20]-[26], etc.). In 1982-1989, J.M. Rassias [15, 16] replaced the sum appearing on the right-hand side of equation (1.1) by the product of powers of norms. In fact, he proved the following theorem.
Theorem 1.1 Let f : E1 → E2 be a mapping from a normed vector space E1 into a Banach space E2 subject to the inequality
‖f(x + y) − f(x) − f(y)‖ ≤ ε(‖x‖^p ‖y‖^p)   (1.3)
for all x, y ∈ E1, where ε and p are constants with ε > 0 and 0 ≤ p < 1/2. Then the limit | 1506.02488#5 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 6 | L(x) = lim_{n→∞} f(2^n x) / 2^n   (1.4)
exists for all x ∈ E1, and L : E1 → E2 is the unique additive mapping which satisfies
‖f(x) − L(x)‖ ≤ (ε / (2 − 2^{2p})) ‖x‖^{2p}   (1.5)
for all x ∈ E1. If p > 1/2, the inequality (1.3) holds for x, y ∈ E1 and the limit
A(x) = lim_{n→∞} 2^n f(x / 2^n)   (1.6)
exists for all x ∈ E1 and A : E1 → E2 is the unique additive mapping which satisfies
‖f(x) − A(x)‖ ≤ (ε / (2^{2p} − 2)) ‖x‖^{2p}   (∀x ∈ E1)   (1.7) | 1506.02488#6 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 7 | 2
Recently, Cadariu et al. [19] studied the generalized Hyers-Ulam stability by using the direct method as well as the fixed point method for the affine type functional equation
f(2x + y) + f(x + 2y) + f(x) + f(y) = 4f(x + y), for all x, y ∈ G.   (1.8)
In the present paper, we obtain the general solution of the following functional equation
f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) = 6f (x + y + z). (1.9)
where f : X → Y, X and Y are normed spaces. Then, we establish the fuzzy Hyers-Ulam-Rassias stability of the above functional equation.
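As a quick sanity check on equation (1.9), anticipating Theorem 3.1 below, the following short calculation (added here for illustration) verifies that any affine map f(x) = A(x) + c, with A additive and c a constant, is a solution:

```latex
\begin{align*}
&f(3x+y+z) + f(x+3y+z) + f(x+y+3z) + f(x) + f(y) + f(z) \\
&\quad = A(3x+y+z) + A(x+3y+z) + A(x+y+3z) + A(x) + A(y) + A(z) + 6c \\
&\quad = A\bigl((3x+y+z) + (x+3y+z) + (x+y+3z) + x + y + z\bigr) + 6c \\
&\quad = A\bigl(6(x+y+z)\bigr) + 6c = 6A(x+y+z) + 6c = 6f(x+y+z).
\end{align*}
```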
# 2. Preliminary Notes
Before we proceed to the main results, we will introduce a deï¬nition and some ex- amples to illustrate the idea of fuzzy norm. | 1506.02488#7 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 8 | # 2. Preliminary Notes
Before we proceed to the main results, we will introduce a definition and some examples to illustrate the idea of a fuzzy norm.
Definition 2.1 Let X be a real linear space. A mapping N : X × R → [0, 1] (the so-called fuzzy subset) is said to be a fuzzy norm on X if for all x, y ∈ X and all s, t ∈ R: (N1) N(x, t) = 0 for t ≤ 0; (N2) x = 0 if and only if N(x, t) = 1 for all t > 0; (N3) N(cx, t) = N(x, t/|c|) if c ≠ 0; (N4) N(x + y, t + s) ≥ min{N(x, t), N(y, s)}; (N5) N(x, ·) is a non-decreasing function on R and lim_{t→∞} N(x, t) = 1; (N6) for x ≠ 0, N(x, ·) is continuous on R. The pair (X, N) is called a fuzzy normed linear space. One may regard N(x, t) as the truth value of the statement that the norm of x is less than or equal to the real number t. | 1506.02488#8 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 11 | 3
is a fuzzy norm on X.
Definition 2.4 Let (X, N) be a fuzzy normed linear space. A sequence {x_n} in X is said to be convergent if there exists an x ∈ X such that lim_{n→∞} N(x_n − x, t) = 1 for all t > 0. In this case, x is called the limit of the sequence {x_n} and we denote it by
N-lim_{n→∞} x_n = x.
Definition 2.5 Let (X, N) be a fuzzy normed linear space. A sequence {x_n} in X is said to be Cauchy if for each ε > 0 and each δ > 0 there exists an n_0 ∈ N such that
N(x_m − x_n, δ) > 1 − ε   (m, n ≥ n_0).
It is well known that every convergent sequence in a fuzzy normed linear space is Cauchy. If each Cauchy sequence is convergent, then the fuzzy norm is said to be complete and the fuzzy normed vector space is called a fuzzy Banach space. | 1506.02488#11 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 12 | The remaining part of the paper is organized as follows: We discuss the general solution of functional equation (1.9) in Section 3. Section 4 is devoted to investigate the non- uniform version of stability of functional equation (1.9) in fuzzy normed spaces and in section (5), we show under suitable conditions that in fuzzy normed spaces functional equation (1.9) is stable uniformly. Now we proceed to ï¬nd the general solution of the functional equation (1.9) 3. Solution of the Functional Equation (1.9) Theorem 3.1 A mapping f : X â Y , X and Y are normed spaces, is a solution of the functional equation (1.9) if and only if it is an aï¬ne mapping (i.e., it is the sum between a constant and an additive function). Proof. We can easily seen that any aï¬ne function f is a solution of the equation (1.9). Conversely, we have two cases: Case 1 : f (0) = 0. If we take y = z = âx in (1.9), we obtain
2f(x) + 2f(−3x) + 2f(−x) = 6f(−x), for all x ∈ X.   (3.1)
Again replacing putting y = z = 0 in (1.9), we obtain | 1506.02488#12 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 13 | Again replacing putting y = z = 0 in (1.9), we obtain
f(3x) = 3f(x), for all x ∈ X.   (3.2)
By (3.1) and (3.2), we have f(−x) = −f(x) for all x ∈ X. It results that f is an odd mapping. Replacing z by −y in (1.9), we get
f(x + 2y) + f(x − 2y) = 2f(x)   (3.3)
If we replace x and y by (u + v)/2 and (u − v)/4, respectively, in (3.3) and use (3.2), we have | 1506.02488#13 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 14 | f (u + v) = f (u) + f (v), for all u, v â X.
So, f is an additive mapping. Case 2 : General case. Let us consider the function g(x) := f (x) â f (0). It is clear that g(0) = 0 and f (x) = g(x) + f (0). Replacing f by g in (1.9), it results
g(3x + y + z) + g(x + 3y + z) + g(x + y + 3z) + g(x) + g(y) + g(z) = 6g(x + y + z).
for all x, y, z â X. Taking in account that g(0) = 0, from Case 1, we obtain that g is an additive mapping, hence f (x) = g(x) + f (0) is an aï¬ne function. This completes the proof.
For a given mapping f : X â Y , let us denote
Df (x, y, z) = f (3x + y + z) + f (x + 3y + z) + f (x + y + 3z) + f (x) + f (y) + f (z) â 6f (x + y + z) | 1506.02488#14 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 15 | # 4. Fuzzy Hyers-Ulam-Rassias Stability: non-uniform version
Theorem 4.1 Let X be a linear space and (Z, N′) a fuzzy normed space. Let φ : X³ → Z be a mapping such that for some α ≠ 0 with 0 < α < 3
N′(φ(3x, 0, 0), t) ≥ N′(αφ(x, 0, 0), t)   (4.1)
for all x ∈ X, t > 0, and
lim_{n→∞} N′(φ(3^n x, 3^n y, 3^n z), 3^n t) = 1,
for all x, y, z ∈ X and all t > 0. Suppose that (Y, N) is a fuzzy Banach space and an odd mapping f : X → Y satisfies the inequality
N(Df(x, y, z), t) ≥ N′(φ(x, y, z), t)   (4.2)
for all x, y, z ∈ X and all t > 0. Then the limit
A(x) = N â lim nââ f (3nx) 3n | 1506.02488#15 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 16 | exists for all x â X and the mapping A : X â Y is the unique aï¬ne mapping satisfying
N(f(x) − A(x) − f(0), t) ≥ N′(φ(x, 0, 0), (3 − α)t)   (4.3)
for all x â X and all t > 0. Proof. Letting y = z = 0 in (4.2), we get
N(f (3x) â 3f (x) + 2f (0), t) > N â²(Ï(x, 0, 0), t) (4.4)
for all x â X and all t > 0. If we deï¬ne the mapping g : X â Y such that g(x) := f (x) â f (0) for all x â X. Indeed g(0) = 0. Then (4.4) implies
N(g(3x) â 3g(x), t) > N â²(Ï(x, 0, 0), t)
Replacing x by 3nx in the last inequality, we obtain | 1506.02488#16 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 18 | # for alla ⬠X oore)
nâ1
g(3j+1x) 3j+1 â g(3j x) 3n â g(x) = 3j and (4.5)
# j=0 P
# that
n-1 . n-1 . g(3"x) alt g(3!*1x) â g(34x) alt v( ~ g(x), >) 3741) =N ae 3741 37° 37H) j=0 j-0 n-1 ; ; g(3ittxr) â g(3âx) alt > min Ute (4 arn 31 =} > N"(:p(@, 0,0), t). ce ceeeeeeeeeeseeseeseeeeeeeeeeeeeens (4.6)
for all x â X and all t > 0. Replacing x by 3mx in (4.6), we get
n-1 ; g Brtmy wero alt (> sta) > N' (ole, 0.0), 3rtm j=0 t qm
# j=0 X
and so
: g(r) g(3â¢ax) "LA* alt v( = a , Dd, azar) ) > N'(v(, 0,0), 1) j=m | 1506.02488#18 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 19 | # j=m X > N â²
g(3"t"a g(3"x t (> â - d 2 N' (20,0), ae â (4.7) 3741
# j=m P
6
â
( α 3 )j < â, the Cauchy for all x â X, t > 0 and m, n > 0. Since 0 < α < 3 and
# j=0 P
criterion for convergence and (N5) imply that { g(3nx) 3n } is a Cauchy sequence in (Y, N). Since (Y, N) is a fuzzy Banach space, this sequence converges to some point A(x) â Y . f (3nx) Hence, we can deï¬ne a mapping A : X â Y by A(x) = N â lim 3n nââ g(3nx) 3n = N â lim nââ for all x â X, namely. Since f is odd, A is odd. Letting m = 0 in (4.7), we get
(oe _ ule).t)> Nâ (ote 0,0), a) | 1506.02488#19 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 20 | (oe _ ule).t)> Nâ (ote 0,0), a)
# j=0 P
Taking the limit as n â â and using (N6), we get
N(A(a) â g(x), t) > W'(e(0.0,0), = t ) ad 3741 j=0 = N'(y(2, 0,0), (3 â a)t) N(f(2) ~~ A(z) ~~ f(0), t) 2 N'(y(@, 0, 0), (3 ~~ a)t)
for all x â X and all t > 0. Now we claim that A is aï¬ne. Replacing x, y, z by 3nx, 3ny, 3nz, respectively, in (4.2), we get
1 N (FOr 3ây, 3"), â) > N'(y(3"2, 3ây, 3"z), 3"t) | 1506.02488#20 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 21 | # for all x,y,z ⬠X and allt > 0. Since
lim nââ N â²(Ï(3nx, 3ny, 3nz), 3nt) = 1,
A satisï¬es the functional equation (1.9). Hence A is aï¬ne. To prove the uniqueness of A, let Aâ² : X â Y be another aï¬ne mapping satisfying (4.3). Fix x â X. Clearly A(3nx) = 3nA(x) and Aâ²(3nx) = 3nAâ²(x) for all x â X and all n â N. It follows from (4.3) that
N(A(2) â Aâ(2),t) = (= AiS"a) â) 3" 3â > mind N A(3"r) â g(3 t) t _N g(3"r) = AN(3 a) t 3â 3" 2 3â 3â 2 3"(3 â a)t > w'(o(3%2.0.0), ( ; a) ) 3"(3 âa)t > N'| v(x, 0,0), ââââ (02.0.0), ar )
7 | 1506.02488#21 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 22 | 7
3n(3âα)
for all x â X and all t > 0. Since lim nââ 2αn = â, we obtain
lim nââ N â² Ï(x, 0, 0), 3n(3 â α)t 2αn = 1.
Thus N(A(x) â Aâ²(x), t) = 1 for all x â X and all t > 0, and so A(x) = Aâ²(x). This completes the proof. 5. Fuzzy Hyers-Ulam-Rassias Stability: uniform version
Theorem 5.1 Let X be a linear space and (Y, N) be a fuzzy Banach space. Let Ï : X 3 â [0, â) be a function such that
â
ËÏ(x, y, z) = 1 3n Ï(3nx, 3ny, 3nz) < â (5.1) | 1506.02488#22 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 23 | # n=0 X
for all x, y, z â X. Let f : X â Y be a uniformly approximately aï¬ne mapping with respect to Ï in the sense that
lim tââ N(Df (x, y, z), tÏ(x, y, z)) = 1 (5.2)
uniformly on X 3. Then
A(x) := N â lim nââ f (3nx) 3n
for all x â X exists and deï¬nes an aï¬ne mapping A : X â Y such that if for some α > 0, δ > 0
N(Df (x, y, z), δÏ(x, y, z)) > α (5.3)
for all x, y, z â X, then
N(f (x) â A(x) â f (0), δ 3 ËÏ(0, 0, , x)) > α | 1506.02488#23 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 24 | for all x â X. Proof. Let ε > 0, by (5.2), we can ï¬nd t0 > 0 such that
N(Df (x, y, z), tÏ(x, y, z)) > 1 â ε (5.4)
for all x, y, z â X and all t > t0. Deï¬ne g : X â Y such that g(x) := f (x) â f (0). It is clear that g(0) = 0 and f (x) = g(x) + f (0). Now (5.4) implies that
N(Dg(x, y, z), tÏ(x, y, z)) > 1 â ε (5.5)
for all x, y, z â X and all t > t0. By induction on n, we will show that
n-1 x(a" = 3"g(x),t 5° 3-1 0(0, 0, 3"0)) >l-e« (5.6) m=0
# m=0 X
8 | 1506.02488#24 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 25 | # m=0 X
8
for all x â X, all t > t0 and n â N. Putting x = y = 0 and z = x in (5.5), we get (5.6) for n = 1. Let (5.6) holds for some positive integers n. Then
n N(g(3n+1x) â 3n+1g(x), t 3nâmÏ(0, 0, 3mx))
# m=0 X
> min{N(g(3n+1x) â 3g(3nx), tÏ(0, 0, 3nx)), n N(3g(3nx) â 3n+1g(x), t 3(nâm)Ï(0, 0, 3mx))} m=0 X > min{1 â ε, 1 â ε} = 1 â ε.
This completes the induction argument. Let t = t0 and put n = p. Then by replacing x with 3nx in (5.6), we obtain
pâ1 N(g(3n+px) â 3pg(3nx), t0 3pâmâ1Ï(0, 0, 3n+mx)) > 1 â ε
# m=0 X | 1506.02488#25 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 27 | # m=0 X
for all integers n > 0, p > 0. The convergence of (5.1) and the equation
pâ1 3â(n+m+1)Ï(0, 0, 3n+mx)) = 1 k n+pâ1 3âmÏ(0, 0, 3mx)
# m=0 X
# m=n X
guarantees that for given δ > 0, there exists n0 â N such that
t0 3 n+pâ1 3âmÏ(0, 0, 3mx) < δ
# m=n X
for all n > n0 and p > 0. It follows from (5.7) that
n+p, Now n+p, x(& zt) 9(3 e) ns n(n a . to x3 (n+m-+1) y(0, 0, grtmy, )> lâe 3n+p 3n 3n+p m=0 (5.8) | 1506.02488#27 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 28 | m=0 (5.8) for each n > no and all p > 0. Hence {oe 3 Hy is a Cauchy sequence in Y. Since Y isa fuzzy Banach space, this sequence converges to ome A(x) ⬠Y. Hence we can define a mapping A: X + Y by A(a) :-= N â jim 4 § = 7 â Nâ lim ae for alla ⬠X nâ0o namely. For each t > 0 and x ⬠X
lim nââ N A(x) â f (3nx) 3n , t = 1.
9
Now, let x, y, z â X. Fix t > 0 and 0 < ε < 1. Since lim nââ is some n1 > n0 such that 3n Ï(3nx, 3ny, 3nz) = 0, there 1 | 1506.02488#28 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 30 | The ï¬rst 7 terms on the right hand side of the above inequality tend to 1 as n â â and the last term is greater than N(Df (3nx, 3ny, 3nz), t0Ï(3nx, 3ny, 3nz)), i.e., by (5.4), greater than or equal to 1âε. Thus N(DA(x, y, z), t) > 1âε for all t > 0 and 0 < ε < 1. It follows that N(DA(x, y, z), t) = 1 for all t > 0 and by (N2), we have DA(x, y, z) = 1, i.e.,
A(3x + y + z) + A(x + 3y + z) + A(x + y + 3z) + A(x) + A(y) + A(z) = 6A(x + y + z) To end the proof, let for some positive α and δ, (5.3) holds. Let
nâ1 Ïn(x, y, z) := 3â(m+1)Ï(3mx, 3my, 3mz)
# m=0 X | 1506.02488#30 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 32 | Combining (5.8), (5.9) and the fact that
lim nââ N g(3nx) 3n â A(x), s = lim nââ N f (3nx) 3n â A(x), s = 1,
10
5) f
we obtain that
N(g(x) â A(x), δÏn(0, 0, x) + s) > α
for large enough n. By the (upper semi) continuity of real function N(g(x) â A(x), .), we obtain that
N g(x) â A(x), δ 3 ËÏ(0, 0, x) + s > α.
Taking the limit as s â 0, we conclude that
N g(x) â A(x), δ 3 ËÏ(0, 0, x) > α
N f (x) â A(x) â f (0), δ 3 ËÏ(0, 0, x) > α. | 1506.02488#32 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 33 | This completes the proof.
Theorem 5.2 Let X be a linear space and (Y, N) be a fuzzy Banach space. Let Ï : X 3 â [0, â) be a function satisfying (5.1). Let f : X â Y be a uniformly ap- proximately aï¬ne mapping with respect to Ï. Then there is a unique aï¬ne mapping A : X â Y such that
lim tââ N(f (x) â A(x) â f (0), t ËÏ(0, 0, x)) = 1 (5.11)
uniformly on X. Proof. The existence of uniform limit (5.11) immediately follows from Theorem 4.5. It remains to prove the uniqueness assertion. Let Aâ² be another aï¬ne mapping satisfying (5.11). Fix c > 0. Given ε > 0, by (5.11) for A and Aâ², we can ï¬nd some t0 > 0 such that
N(g(x) â A(x), N(g(x) â Aâ²(x), t 2 t 2 ËÏ(0, 0, x)) > 1 â ε, ËÏ(0, 0, x)) > 1 â ε | 1506.02488#33 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 36 | > 1 â ε.
It follows that N(Aâ²(x) â A(x), c) = 1, for all c > 0. Thus A(x) = Aâ²(x) for all x â X. This completes the proof.
Considering the control function Ï(x, y, z) = ε(kxkp + kykp + kzkp) for some ε > 0, we obtain the following:
Corollary 5.3 Let X be a normed linear space, let (Y, N) be a fuzzy Banach space, let ε > 0, and let 0 6 p < 1. Suppose that f : X â Y is a function such that
lim nââ N(Df (x, y, z), tε(kxkp + kykp + kzkp)) = 1
uniformly on X 3. Then there is a unique aï¬ne mapping A : X â Y such that
lim tââ N f (x) â A(x) â f (0), εt31âpkxkp 31âp â 1 = 1 | 1506.02488#36 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 37 | uniformly on X.
12
# References
[1] L.A. Zadeh, Fuzzy sets, Information and Control 8 (1965) 338-353.
[2] A.K. Katsaras, Fuzzy topological vector spaces II, Fuzzy Sets Syst., 12(1984), 143-154.
[3] C. Wu and J. Fang, Fuzzy generalization of Kolmogoroff's theorem, J. Harbin Inst. Technol., 1(1984), 1-7.
[4] R. Biswas, Fuzzy inner product space and fuzzy norm functions, Inform. Sci., 53(1991), 185-190.
[5] C. Felbin, Finite dimensional fuzzy normed space, Fuzzy Sets Syst., 48(1992), 239-248.
[6] T. Bag and S.K. Samanta, Finite dimensional fuzzy normed linear spaces, J. Fuzzy Math., 11:3(2003), 687-705.
[7] S.C. Cheng and J.N. Mordeson, Fuzzy linear operator and fuzzy normed linear spaces, Bull. Calcuta Math. Soc., 86(1994), 429-436. | 1506.02488#37 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 38 | [8] S.M. Ulam, Problems in Modern Mathematics, Science ed., John Wiley & Sons: New York; 1940 (Chapter VI, Some Questions in Analysis: Section 1, Stability).
[9] D.H. Hyers, On the stability of the linear functional equation, Proc. Natl. Acad. Sci., 27(1941) 222â224.
[10] Th. M. Rassias, On the stability of the linear mapping in Banach spaces, Proc. Amer. Math. Soc., 72(1978), 297-300.
[11] P. Gavruta, A generalization of the Hyers-Ulam-Rassias stability of approximately additive mappings, J. Math. Anal. Appl., 184 (1994), 431-436.
[12] S. Czerwik, Functional Equations and Inequalities in Several Variables, World Scientiï¬c Publishing Co., Inc., River Edge, NJ, 2002.
[13] D.H. Hyers, G. Isac and Th.M. Rassias, Stability of Functional Equations in Several Variables, Birkhäuser, Basel; 1998.
[14] P. Kannappan, Functional Equations and Inequalities with Applications, Springer, 2009. | 1506.02488#38 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 39 | [14] P. Kannappan, Functional Equations and Inequalities with Applications, Springer, 2009.
[15] J.M. Rassias, On approximation of approximately linear mappings by linear map- ping, J.Funct. Anal., 46:1(1982), 126-130.
[16] J.M. Rassias, On approximation of approximately linear mappings by linear map- pings, Bull.Sci. Math. (2), 108:4(1984), 445-446.
13
[17] M. Mursaleen, Khursheed J. Ansari, Stability results in intuitionistic fuzzy normed spaces for a cubic functional equation, Appl. Math. Inf. Sci. 7, No. 5, 1685-1692 (2013).
[18] S. Javadi, J. M. Rassias, Stability of General Cubic Mapping in Fuzzy Normed Spaces, An. S¸t. Univ. Ovidius Constant¸a, Vol. 20(1), 2012, 129-150.
[19] L. Cadariu, L. Gavruta, P. Gavruta, On the stability of an affine functional equation, J. Nonlinear Sci. Appl., 6(2013), 60-67.
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1506.02488 | 40 | [20] S. A. Mohiuddine, Stability of Jensen functional equation in intuitionistic fuzzy normed space, Chaos, Solitons & Fract., 42 (2009) 2989â2996.
[21] S. A. Mohiuddine and M.A. Alghamdi, Stability of functional equation obtained through a ï¬xed-point alternative in intuitionistic fuzzy normed spaces, Adv. Dif- ference Equ. 2012, 2012:141.
[22] S. A. Mohiuddine and H. S¸evli, Stability of Pexiderized quadratic functional equa- tion in intuitionistic fuzzy normed space, J. Comput. Appl. Math., 235 (2011) 2137â2146.
[23] M. Mursaleen and K. J. Ansari, Stability results in intuitionistic fuzzy normed spaces for a cubic functional equation, Appl. Math. Inf. Sci., 7(5) (2013) 1685â 1692.
[24] M. Mursaleen and S. A. Mohiuddine, On stability of a cubic functional equation in intuitionistic fuzzy normed spaces, Chaos, Solitons Fract. 42 (2009) 2997â3005. | 1506.02488#40 | On the Fuzzy Stability of an Affine Functional Equation | In this paper, we obtain the general solution of the following functional
equation f(3x + y + z) + f(x + 3y + z) + f(x + y + 3z) + f(x) + f(y) + f(z) =
6f(x + y + z): We establish the Hyers-Ulam-Rassias stability of the above
functional equation in the fuzzy normed spaces. Further we show the above
functional equation is stable in the sense of Hyers and Ulam in fuzzy normed
spaces. | http://arxiv.org/pdf/1506.02488 | Md. Nasiruzzaman | math.CA | 14 pages | null | math.CA | 20150524 | 20150524 | [
{
"id": "1506.02488"
}
] |
1505.05008 | 1 | Most state-of-the-art named entity recognition (NER) systems rely on handcrafted features and on the output of other NLP tasks such as part-of-speech (POS) tagging and text chunking. In this work we propose a language-independent NER system that uses automatically learned features only. Our approach is based on the CharWNN deep neural network, which uses word-level and character-level representations (embeddings) to perform sequential classification. We perform an extensive number of experiments using two annotated corpora in two different languages: the HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in Spanish. Our experimental results shed light on the contribution of neural character embeddings for NER. Moreover, we demonstrate that the same neural network which has been successfully applied to POS tagging can also achieve state-of-the-art results for language-independent NER, using the same hyperparameters, and without any handcrafted features. For the HAREM I corpus, CharWNN outperforms | 1505.05008#1 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shade light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independet NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 3 | # Introduction
Named entity recognition is a natural language processing (NLP) task that consists of finding names in a text and classifying them among several predefined categories of interest such as person, organization, location and time. Although machine learning based systems have been the
predominant approach to achieve state-of-the-art results for NER, most of these NER systems rely on the use of costly handcrafted features and on the output of other NLP tasks (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003; Doddington et al., 2004; Finkel et al., 2005; Milidiú et al., 2007). On the other hand, some recent work on NER has used deep learning strategies which minimize the need for these costly features (Chen et al., 2010; Collobert et al., 2011; Passos et al., 2014; Tang et al., 2014). However, as far as we know, there is still no work on deep learning approaches for NER that use character-level embeddings. | 1505.05008#3 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shade light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independet NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 4 | language-independent NER using CharWNN, a recently proposed deep neural network (DNN) architecture that jointly uses word-level and character-level embeddings to perform sequential classification (dos Santos and Zadrozny, 2014). CharWNN employs a convolutional layer that allows effective character-level feature extraction from words of any size. This approach has proven to be very effective for language-independent POS tagging (dos Santos and Zadrozny, 2014). | 1505.05008#4 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shade light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independet NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 5 | We perform an extensive number of experiments using two annotated corpora: the HAREM I corpus, which contains texts in Portuguese; and the SPA CoNLL-2002, which contains texts in Spanish. In our experiments, we compare the performance of the joint and individual use of character-level and word-level embeddings. We provide information on the impact of unsupervised pre-training of word embeddings on the performance of our proposed NER approach. Our experimental results evidence that CharWNN is effective and robust for Portuguese and Spanish NER. Using the same CharWNN configuration used by dos Santos and Zadrozny (2014) for POS tagging, we achieve state-of-the-art results for both corpora. For the HAREM I corpus, CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score for the total scenario (ten NE classes), and by 7.2 points in the F1 for the selective scenario (five NE classes). This is a remarkable result for a NER system that uses only automatically learned features. | 1505.05008#5 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shade light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independet NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 6 | This work is organized as follows. In Section 2, we briefly describe the CharWNN architecture. Section 3 details our experimental setup and Section 4 discusses our experimental results. Section 6 presents our final remarks.
# 2 CharWNN
CharWNN extends Collobert et al.'s (2011) neural network architecture for sequential classification by adding a convolutional layer to extract character-level representations (dos Santos and Zadrozny, 2014). Given a sentence, the network gives for each word a score for each class (tag) τ ∈ T. As depicted in Figure 1, in order to score a word, the network takes as input a fixed-sized window of words centralized in the target word. The input is passed through a sequence of layers where features with increasing levels of complexity are extracted. The output for the whole sentence is then processed using the Viterbi algorithm (Viterbi, 1967) to perform structured prediction. For a detailed description of the CharWNN neural network we refer the reader to (dos Santos and Zadrozny, 2014).
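For illustration, a minimal sketch of Viterbi decoding over per-word class scores is shown below; the transition score matrix and the score shapes are assumptions for the example and are not CharWNN's exact parameterization.

```python
import numpy as np

def viterbi_decode(scores, transitions):
    """scores: (N, T) per-word class scores; transitions: (T, T) transition scores.
    Returns the highest-scoring tag sequence for the whole sentence."""
    n_words, n_tags = scores.shape
    delta = scores[0].copy()                     # best score ending in each tag at word 0
    backptr = np.zeros((n_words, n_tags), dtype=int)
    for i in range(1, n_words):
        cand = delta[:, None] + transitions + scores[i][None, :]
        backptr[i] = cand.argmax(axis=0)         # best previous tag for each current tag
        delta = cand.max(axis=0)
    tags = [int(delta.argmax())]
    for i in range(n_words - 1, 0, -1):          # follow back-pointers
        tags.append(int(backptr[i][tags[-1]]))
    return tags[::-1]
```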
# 2.1 Word- and Character-level Embeddings | 1505.05008#6 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shade light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independet NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 7 | # 2.1 Word- and Character-level Embeddings
As illustrated in Figure 1, the first layer of the network transforms words into real-valued feature vectors (embeddings). These embeddings are meant to capture morphological, syntactic and semantic information about the words. We use a fixed-sized word vocabulary V^wrd, and we consider that words are composed of characters from a fixed-sized character vocabulary V^chr. Given a sentence consisting of N words {w_1, w_2, ..., w_N}, every word w_n is converted into a vector u_n = [r^wrd; r^wch], which is composed of two sub-vectors: the word-level embedding r^wrd ∈ R^{d^wrd} and the character-level embedding r^wch ∈ R^{cl_u} of w_n. While word-level embeddings capture syntactic and semantic information, character-level embeddings capture morphological and shape information.
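A minimal sketch of how u_n = [r^wrd; r^wch] could be assembled is shown below; the lookup table, the stand-in for the convolutional character encoder, and the toy dimensions are all illustrative assumptions.

```python
import numpy as np

def word_representation(word, word_emb, char_emb_fn):
    """Build u_n = [r_wrd; r_wch]: the word-level embedding concatenated with the
    character-level embedding produced from the word's characters."""
    r_wrd = word_emb.get(word, word_emb["<UNK>"])   # word-level lookup, shape (d_wrd,)
    r_wch = char_emb_fn(word)                       # character-level embedding, shape (cl_u,)
    return np.concatenate([r_wrd, r_wch])

# toy demo with assumed dimensions d_wrd = 4 and cl_u = 3
rng = np.random.default_rng(0)
word_emb = {"Bennett": rng.normal(size=4), "<UNK>": np.zeros(4)}
char_emb = lambda w: np.full(3, len(w) / 10.0)      # stand-in for the convolutional layer
u = word_representation("Bennett", word_emb, char_emb)  # shape (7,)
```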
Word-level embeddings are encoded by column vectors in an embedding matrix $W^{wrd} \in \mathbb{R}^{d^{wrd} \times |V^{wrd}|}$, and retrieving the embedding of a particular word consists of a simple matrix-vector multiplication. The matrix $W^{wrd}$ is a parameter to be learned, and the size of the word-level embedding $d^{wrd}$ is a hyperparameter to be set by the user.
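As a minimal illustration of this lookup, the sketch below shows that multiplying the embedding matrix by a one-hot vector is the same as selecting one column. All names and sizes are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Illustrative sizes, not the paper's values.
d_wrd, vocab_size = 4, 6
W_wrd = np.random.randn(d_wrd, vocab_size)   # embedding matrix, one column per word

def word_embedding(word_index: int) -> np.ndarray:
    """Retrieve r_wrd for a word: W_wrd @ one_hot(word_index)."""
    one_hot = np.zeros(vocab_size)
    one_hot[word_index] = 1.0
    return W_wrd @ one_hot                    # same result as W_wrd[:, word_index]

r_wrd = word_embedding(2)
assert np.allclose(r_wrd, W_wrd[:, 2])
```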
The character-level embedding of each word is computed using a convolutional layer (Waibel et al., 1989; Lecun et al., 1998). In Figure 1, we illustrate the construction of the character-level embedding for the word Bennett, but the same process is used to construct the character-level embedding of each word in the input. The convolutional layer first produces local features around each character of the word, and then combines them using a max operation to create a fixed-sized character-level embedding of the word.
Given a word $w$ composed of $M$ characters $\{c_1, c_2, ..., c_M\}$, we first transform each character $c_m$ into a character embedding $r^{chr}_m$. Character embeddings are encoded by column vectors in the embedding matrix $W^{chr} \in \mathbb{R}^{d^{chr} \times |V^{chr}|}$. Given a character $c$, its embedding $r^{chr}$ is obtained by the matrix-vector product $r^{chr} = W^{chr} v^{c}$, where $v^{c}$ is a vector of size $|V^{chr}|$ which has value 1 at index $c$ and zero in all other positions. The input for the convolutional layer is the sequence of character embeddings $\{r^{chr}_1, r^{chr}_2, ..., r^{chr}_M\}$.

The convolutional layer applies a matrix-vector operation to each window of size $k^{chr}$ of successive windows in the sequence $\{r^{chr}_1, r^{chr}_2, ..., r^{chr}_M\}$. Let us define the vector $z_m \in \mathbb{R}^{d^{chr} k^{chr}}$ as the concatenation of the character embedding $m$, its $(k^{chr}-1)/2$ left neighbors, and its $(k^{chr}-1)/2$ right neighbors:
$z_m = \left( r^{chr}_{m-(k^{chr}-1)/2}, \ldots, r^{chr}_{m+(k^{chr}-1)/2} \right)^{T}$
The convolutional layer computes the $j$-th element of the vector $r^{wch}$, which is the character-level embedding of $w$, as follows:
$[r^{wch}]_j = \max_{1 \le m \le M} \left[ W^{0} z_m + b^{0} \right]_j \qquad (1)$
where $W^{0} \in \mathbb{R}^{cl_u \times d^{chr} k^{chr}}$ is the weight matrix of the convolutional layer. The same matrix is used to extract local features around each character window of the given word. Using the max over all character windows of the word, we extract a fixed-sized feature vector for the word.
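The following NumPy sketch mirrors Equation 1: it slides a window over the character embeddings, applies the shared weight matrix, and max-pools over all windows. Sizes are illustrative assumptions, and zero-padding at word boundaries is also an assumption of this sketch rather than something stated in the paper.

```python
import numpy as np

d_chr, k_chr, cl_u = 3, 3, 5                        # illustrative hyperparameters
rng = np.random.default_rng(0)
W0 = rng.normal(size=(cl_u, d_chr * k_chr))          # shared convolutional weight matrix
b0 = rng.normal(size=cl_u)

def char_embedding(char_vectors: np.ndarray) -> np.ndarray:
    """char_vectors: (M, d_chr) rows r_chr_1..r_chr_M. Returns r_wch of size cl_u (Equation 1)."""
    M = char_vectors.shape[0]
    pad = (k_chr - 1) // 2
    padded = np.vstack([np.zeros((pad, d_chr)), char_vectors, np.zeros((pad, d_chr))])
    scores = []
    for m in range(M):
        z_m = padded[m:m + k_chr].reshape(-1)        # concatenated window around character m
        scores.append(W0 @ z_m + b0)                 # local features for this window
    return np.max(np.stack(scores), axis=0)          # element-wise max over all windows

r_wch = char_embedding(rng.normal(size=(7, d_chr)))  # e.g. the 7 characters of "Bennett"
print(r_wch.shape)                                   # (cl_u,)
```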
Matrices $W^{chr}$ and $W^{0}$, and vector $b^{0}$, are parameters to be learned.

Figure 1: CharWNN Architecture
The size of the character vector $d^{chr}$, the number of convolutional units $cl_u$ (which corresponds to the size of the character-level embedding of a word), and the size of the character context window $k^{chr}$ are hyperparameters.
Next, the vector $r$ is processed by two standard neural network layers, which extract one more level of representation and compute the scores:
$s(w_n) = W^{2}\, h(W^{1} r + b^{1}) + b^{2} \qquad (2)$
where matrices $W^{1} \in \mathbb{R}^{hl_u \times k^{wrd}(d^{wrd}+cl_u)}$ and $W^{2} \in \mathbb{R}^{|T| \times hl_u}$, and vectors $b^{1} \in \mathbb{R}^{hl_u}$ and $b^{2} \in \mathbb{R}^{|T|}$, are parameters to be learned. The transfer function $h(.)$ is the hyperbolic tangent. The size of the context window $k^{wrd}$ and the number of hidden units $hl_u$ are hyperparameters to be chosen by the user.

# 2.2 Scoring and Structured Inference

We follow the window approach of Collobert et al. (2011) to score all tags in $T$ for each word in a sentence. This approach follows the assumption that in sequential classification the tag of a word depends mainly on its neighboring words. Given a sentence with $N$ words $\{w_1, w_2, ..., w_N\}$, which have been converted to joint word-level and character-level embeddings $\{u_1, u_2, ..., u_N\}$, to compute tag scores for the $n$-th word $w_n$ in the sentence, we first create a vector $r$ resulting from the concatenation of a sequence of $k^{wrd}$ embeddings, centralized in the $n$-th word:

$r = \left( u_{n-(k^{wrd}-1)/2}, \ldots, u_{n+(k^{wrd}-1)/2} \right)^{T}$

We use a special padding token for the words with indices outside of the sentence boundaries.
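A small sketch of the window approach and of Equation 2 follows. The padding vector, variable names and sizes are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
emb_dim, k_wrd, hl_u, n_tags = 6, 3, 8, 4            # illustrative sizes
W1 = rng.normal(size=(hl_u, k_wrd * emb_dim))
b1 = rng.normal(size=hl_u)
W2 = rng.normal(size=(n_tags, hl_u))
b2 = rng.normal(size=n_tags)
padding = np.zeros(emb_dim)                           # stands in for the special padding token

def score_word(u: list, n: int) -> np.ndarray:
    """u: per-word joint embeddings u_1..u_N; returns s(w_n), one score per tag (Equation 2)."""
    half = (k_wrd - 1) // 2
    window = [u[i] if 0 <= i < len(u) else padding for i in range(n - half, n + half + 1)]
    r = np.concatenate(window)                        # k_wrd embeddings centred on word n
    return W2 @ np.tanh(W1 @ r + b1) + b2

sentence = [rng.normal(size=emb_dim) for _ in range(5)]
print(score_word(sentence, 0))                        # first word, uses padding on the left
```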
Like in (Collobert et al., 2011), CharWNN uses a prediction scheme that takes into account the sentence structure. The method uses a transition score $A_{tu}$ for jumping from tag $t \in T$ to $u \in T$ in successive words, and a score $A_{0t}$ for starting from the $t$-th tag. Given the sentence $[w]_1^N = \{w_1, w_2, ..., w_N\}$, the score for tag path $[t]_1^N = \{t_1, t_2, ..., t_N\}$ is computed as follows:

$S\left([w]_1^N, [t]_1^N, \theta\right) = \sum_{n=1}^{N} \left( A_{t_{n-1} t_n} + s(w_n)_{t_n} \right) \qquad (3)$
where $s(w_n)_{t_n}$ is the score given for tag $t_n$ at word $w_n$, and $\theta$ is the set of all trainable network parameters ($W^{wrd}$, $W^{chr}$, $W^{0}$, $b^{0}$, $W^{1}$, $b^{1}$, $W^{2}$, $b^{2}$, $A$). After scoring each word in the sentence, the predicted sequence is inferred with the Viterbi algorithm.
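The sketch below shows a generic Viterbi decoder over the per-word scores and the transition scores, i.e., the maximization of Equation 3. It is a standard textbook implementation, not the authors' code, and the array names are assumptions.

```python
import numpy as np

def viterbi_decode(emissions: np.ndarray, A: np.ndarray, A0: np.ndarray) -> list:
    """emissions: (N, T) word scores s(w_n); A: (T, T) transition scores; A0: (T,) start scores.
    Returns the tag path that maximizes Equation 3."""
    N, T = emissions.shape
    delta = A0 + emissions[0]                         # best score of a path ending in each tag at word 1
    backptr = np.zeros((N, T), dtype=int)
    for n in range(1, N):
        cand = delta[:, None] + A + emissions[n][None, :]   # previous tag x current tag
        backptr[n] = np.argmax(cand, axis=0)
        delta = np.max(cand, axis=0)
    path = [int(np.argmax(delta))]
    for n in range(N - 1, 0, -1):                     # follow back-pointers to recover the path
        path.append(int(backptr[n][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(2)
print(viterbi_decode(rng.normal(size=(6, 4)), rng.normal(size=(4, 4)), rng.normal(size=4)))
```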
# 2.3 Network Training
We train CharWNN by minimizing the negative log-likelihood over the training set $D$. In the same way as in (Collobert et al., 2011), we interpret the sentence score (3) as a conditional probability over a path. For this purpose, we exponentiate the score (3) and normalize it with respect to all possible paths. Taking the log, we arrive at the following conditional log-probability:
$\log p\left([t]_1^N \mid [w]_1^N, \theta\right) = S\left([w]_1^N, [t]_1^N, \theta\right) - \log\!\left( \sum_{\forall [u]_1^N \in T^N} e^{\,S([w]_1^N, [u]_1^N, \theta)} \right) \qquad (4)$
The log-likelihood in Equation 4 can be computed efficiently using dynamic programming (Collobert, 2011). We use stochastic gradient descent (SGD) to minimize the negative log-likelihood with respect to θ. We use the backpropagation algorithm to compute the gradients of the neural network. We implemented CharWNN using the Theano library (Bergstra et al., 2010).
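The normalizer in Equation 4 can be computed with the usual forward recursion in log space; a minimal NumPy sketch is shown below. This is a generic illustration of that dynamic program under assumed variable names, not the paper's Theano implementation.

```python
import numpy as np

def _logsumexp(x: np.ndarray) -> np.ndarray:
    """Numerically stable log-sum-exp over the first axis."""
    m = x.max(axis=0)
    return m + np.log(np.exp(x - m).sum(axis=0))

def log_partition(emissions: np.ndarray, A: np.ndarray, A0: np.ndarray) -> float:
    """emissions: (N, T); A: (T, T) transitions; A0: (T,) start scores.
    Returns log of the sum of exp(score) over all tag paths (the normalizer in Equation 4)."""
    alpha = A0 + emissions[0]                         # log-scores of length-1 paths
    for n in range(1, emissions.shape[0]):
        # combine alpha[i] + A[i, j], sum over previous tag i in log space, add emission for word n
        alpha = _logsumexp(alpha[:, None] + A) + emissions[n]
    return float(_logsumexp(alpha))

# The negative conditional log-likelihood of a gold path is then log_partition(...) minus the
# gold path score (Equation 3), and can be minimized with SGD.
```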
# 3 Experimental Setup
# 3.1 Unsupervised Learning of Word Embeddings
The word embeddings used in our experiments are initialized by means of unsupervised pre-training. We perform pre-training of word-level embeddings using the skip-gram NN architecture (Mikolov et al., 2013) available in the word2vec tool (http://code.google.com/p/word2vec/).
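The paper uses the original word2vec tool; purely as an illustration, a roughly equivalent skip-gram training call with the gensim library (an assumption, not the authors' setup, and assuming gensim 4.x) could look like this:

```python
from gensim.models import Word2Vec

# `sentences` stands for an iterable of token lists from the unlabeled corpus (hypothetical toy data).
sentences = [["o", "jogador", "marcou", "um", "gol"], ["el", "jugador", "marco", "un", "gol"]]

model = Word2Vec(
    sentences=sentences,
    vector_size=100,   # matches d_wrd = 100 used for CharWNN/WNN
    window=5,
    min_count=1,
    sg=1,              # skip-gram architecture
    workers=4,
)
print(model.wv["jogador"].shape)   # (100,)
```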
In our experiments on Portuguese NER, we use the word-level embeddings previously trained by (dos Santos and Zadrozny, 2014). They have used a corpus composed of the Portuguese Wikipedia, the CETENFolha corpus (http://www.linguateca.pt/cetenfolha/) and the CETEMPublico corpus (http://www.linguateca.pt/cetempublico/).
In our experiments on Spanish NER, we use the Spanish Wikipedia. We process the Spanish Wikipedia corpus using the same steps used by (dos Santos and Zadrozny, 2014): (1) remove paragraphs that are not in Spanish; (2) substitute non-roman characters by a special character; (3) tokenize the text using a tokenizer that we have implemented; (4) remove sentences that are less than 20 characters long (including white spaces) or have less than 5 tokens; (5) lowercase all words and substitute each numerical digit by a 0. The resulting corpus contains around 450 million tokens. Following (dos Santos and Zadrozny, 2014), we do not perform unsupervised learning of character-level embeddings. The character-level embeddings are initialized by randomly sampling each value from a uniform distribution $\mathcal{U}(-r, r)$,
where $r = \sqrt{\dfrac{6}{|V^{chr}| + d^{chr}}}$.
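The sketch below illustrates the kind of preprocessing listed in steps (2) to (5) and the character-embedding initialization just described. It is a rough approximation under stated assumptions (the authors' tokenizer and exact character filtering are not public here), not their pipeline.

```python
import re
import numpy as np

def preprocess(line: str) -> list:
    """Rough sketch of steps (2)-(5): not the authors' tokenizer or exact filtering rules."""
    line = "".join(ch if (ch.isascii() or ch.isalpha()) else "#" for ch in line)  # non-roman -> special char
    tokens = line.lower().split()                          # naive whitespace tokenization
    tokens = [re.sub(r"[0-9]", "0", t) for t in tokens]    # every digit becomes 0
    if len(line) < 20 or len(tokens) < 5:
        return []                                          # drop short sentences
    return tokens

def init_char_embeddings(vocab_chr_size: int, d_chr: int) -> np.ndarray:
    """Character embeddings drawn from U(-r, r) with r = sqrt(6 / (|V_chr| + d_chr))."""
    r = np.sqrt(6.0 / (vocab_chr_size + d_chr))
    return np.random.uniform(-r, r, size=(d_chr, vocab_chr_size))

print(preprocess("El jugador marco 2 goles en 1999 contra el Madrid"))
print(init_char_embeddings(vocab_chr_size=50, d_chr=10).shape)
```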
# 3.2 Corpora
We use the corpus from the first HAREM evaluation (Santos and Cardoso, 2007) in our experiments on Portuguese NER. This corpus is annotated with ten named entity categories: Person (PESSOA), Organization (ORGANIZACAO), Location (LOCAL), Value (VALOR), Date (TEMPO), Abstraction (ABSTRACCAO), Title (OBRA), Event (ACONTECIMENTO), Thing (COISA) and Other (OUTRO). The HAREM corpus is already divided into two subsets: First HAREM and MiniHAREM. Each subset corresponds to a different Portuguese NER contest. In our experiments, we call HAREM I the setup where we use the First HAREM corpus as the training set and the MiniHAREM corpus as the test set. This is the same setup used by dos Santos and Milidiú (2012). Additionally, we tokenize the HAREM corpus and create a development set that comprises 5% of the training set. Table 1 presents some details of this dataset.
In our experiments on Spanish NER we use the SPA CoNLL-2002 corpus, which was developed for the CoNLL-2002 shared task (Tjong Kim Sang, 2002). It is annotated with four named entity categories: Person, Organization, Location and Miscellaneous. The SPA CoNLL-2002 corpus is already divided into training, development and test sets. The development set has characteristics similar to the test corpora.
We treat NER as a sequential classification problem. Hence, in both corpora we use the IOB2 tagging style where: O means that the word is not a NE; B-X is used for the leftmost word of a NE of type X; and I-X means that the word is inside of a NE of type X. The IOB2 tagging style is illustrated in the following example.

Table 1: Named Entity Recognition Corpora.

Corpus          Language    Training Sentences  Training Tokens  Test Sentences  Test Tokens
HAREM I         Portuguese  4,749               93,125           3,393           62,914
SPA CoNLL-2002  Spanish     8,323               264,715          1,517           51,533
Wolff/B-PER ,/O currently/O a/O journalist/O in/O Argentina/B-LOC ,/O played/O with/O Del/B-PER Bosque/I-PER in/O the/O final/O years/O of/O the/O seventies/O in/O Real/B-ORG Madrid/I-ORG ./O
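For readers who want to reproduce this encoding, the small helper below converts entity spans into IOB2 tags. It is a hypothetical convenience function added here for illustration, not part of the paper's tooling.

```python
def to_iob2(tokens, entities):
    """entities: list of (start, end, type) token spans, end exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, etype in entities:
        tags[start] = "B-" + etype            # leftmost word of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + etype            # words inside the entity
    return list(zip(tokens, tags))

tokens = ["Wolff", ",", "currently", "a", "journalist", "in", "Argentina"]
print(to_iob2(tokens, [(0, 1, "PER"), (6, 7, "LOC")]))
# [('Wolff', 'B-PER'), (',', 'O'), ..., ('Argentina', 'B-LOC')]
```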
# 3.3 Model Setup
In most of our experiments, we use the same hyperparameters used by dos Santos and Zadrozny (2014) for part-of-speech tagging. The only exception is the learning rate for SPA CoNLL-2002, which we set to 0.005 in order to avoid divergence. The hyperparameter values are presented in Table 2. We use the development sets to determine the number of training epochs, which is six for HAREM and sixteen for SPA CoNLL-2002.
We compare CharWNN with two similar neural network architectures: CharNN and WNN. CharNN is equivalent to CharWNN without word embeddings, i.e., it uses character-level embeddings only. WNN is equivalent to CharWNN without character-level embeddings, i.e., it uses word embeddings only. Additionally, in the same way as in (Collobert et al., 2011), we check the impact of adding to WNN two handcrafted features that contain character-level information, namely capitalization and suffix. The capitalization feature has five possible values: all lowercased, first uppercased, all uppercased, contains an uppercased letter, and all other cases. We use suffixes of size three. In our experiments, both capitalization and suffix embeddings have dimension five. The hyperparameter values for these two NNs are shown in Table 2.
# 4 Experimental Results
# 4.1 Results for Spanish NER

In Table 3, we report the performance of different NNs for the SPA CoNLL-2002 corpus. All results for this corpus were computed using the CoNLL-2002 evaluation script. CharWNN achieves the best precision, recall and F1 in both development and test sets. For the test set, the F1 of CharWNN is 3 points larger than the F1 of the WNN that uses two additional handcrafted features: suffixes and capitalization. This result suggests that, for the NER task, the character-level embeddings are as or more effective than the two character-level features used in WNN. Similar results were obtained by dos Santos and Zadrozny (2014) in the POS tagging task.
In the two last lines of Table 3 we can see the results of using word embeddings and character-level embeddings separately. Both the WNN that uses word embeddings only and CharNN do not achieve results competitive with the results of the networks that jointly use word-level and character-level information. This is not surprising, since it is already known in the NLP community that jointly using word-level and character-level features is important to perform named entity recognition.
In Table 4, we compare CharWNN results with the ones of a state-of-the-art system for the SPA CoNLL-2002 corpus. This system was trained using AdaBoost and is described in (Carreras et al., 2002). It employs decision trees as a base learner and uses handcrafted features as input. Among others, these features include gazetteers with people names and geographical location names. The AdaBoost-based system divides the NER task into two intermediate sub-tasks: NE identification and NE classification. In the first sub-task, the system identifies NE candidates. In the second sub-task, the system classifies the identified candidates. In Table 4, we can see that even using only automatically learned features, CharWNN achieves state-of-the-art results for the SPA CoNLL-2002. This is an impressive result, since NER is a challenging task to perform without the use of gazetteers.
1505.05008 | 22 | 4http://www.cnts.ua.ac.be/conll2002/ner/bin/conlleval.txt
# 4.2 Results for Portuguese NER
In Table 5, we report the performance of different NNs for the HAREM I corpus. The results in this table were computed using the CoNLL-2002 evaluation script.

Table 2: Neural Network Hyperparameters.
Parameter  Parameter Name               CharWNN  WNN     CharNN
d^wrd      Word embedding dimensions    100      100     -
k^wrd      Word context window size     5        5       5
d^chr      Char. embedding dimensions   10       -       50
k^chr      Char. context window size    5        -       5
cl_u       Convolutional units          50       -       200
hl_u       Hidden units                 300      300     300
lambda     Learning rate                0.0075   0.0075  0.0075
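For convenience, the CharWNN column of Table 2 can be kept as a small configuration dictionary; this is only a convenience sketch, not the authors' code.

```python
# Table 2 (CharWNN column) as a configuration dictionary.
CHARWNN_HYPERPARAMS = {
    "d_wrd": 100,      # word embedding dimensions
    "k_wrd": 5,        # word context window size
    "d_chr": 10,       # character embedding dimensions
    "k_chr": 5,        # character context window size
    "cl_u": 50,        # convolutional units
    "hl_u": 300,       # hidden units
    "learning_rate": 0.0075,
}
```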
Table 3: Comparison of different NNs for the SPA CoNLL-2002 corpus.

NN       Features                    Dev. Set                    Test Set
                                     Prec.   Rec.    F1          Prec.   Rec.    F1
CharWNN  word emb., char emb.        80.13   78.68   79.40       82.21   82.21   82.21
WNN      word emb., suffix, capit.   78.33   76.31   77.30       79.64   78.67   79.15
WNN      word embeddings             73.87   68.45   71.06       73.77   68.19   70.87
CharNN   char embeddings             53.86   51.40   52.60       61.13   59.03   60.06
Table 4: Comparison with the state-of-the-art for the SPA CoNLL-2002 corpus.

System    Features                                                        Prec.   Rec.    F1
CharWNN   word embeddings, char embeddings                                82.21   82.21   82.21
AdaBoost  words, ortographic, POS tags, trigger words, bag-of-words,      81.38   81.40   81.39
          gazetteers, word suffixes, word type patterns, entity length
We report results in two scenarios: total and selective. In the total scenario, all ten categories are taken into account when scoring the systems. In the selective scenario, only five chosen categories (Person, Organization, Location, Date and Value) are taken into account. We can see in Table 5 that CharWNN and the WNN that uses two additional handcrafted features have similar results. We think that by increasing the training data, CharWNN has the potential to learn better character embeddings and outperform WNN, as happens in the SPA CoNLL-2002 corpus, which is larger than the HAREM I corpus. Again, CharNN and the WNN that uses word embeddings only do not achieve results competitive with the results of the networks that jointly use word-level and character-level information.
1505.05008 | 25 | 2007), which uses a scoring strategy different from the CoNLL-2002 evaluation script.
In order to compare CharWNN results with the ones of the state-of-the-art system, we report in Tables 6 and 7 the precision, recall, and F1 scores computed with the evaluation scripts from the HAREM I competition (Santos and Cardoso, 2007), which use a scoring strategy different from the CoNLL-2002 evaluation script.

In Table 6, we compare CharWNN results with the ones of ETL-CMT, a state-of-the-art system for the HAREM I corpus (dos Santos and Milidiú, 2012). ETL-CMT is an ensemble method that uses Entropy Guided Transformation Learning (ETL) as the base learner. The ETL-CMT system uses handcrafted features like gazetteers and dictionaries, as well as the output of other NLP tasks such as POS tagging and noun phrase (NP) chunking. As we can see in Table 6, CharWNN outperforms the state-of-the-art system by a large margin in both total and selective scenarios, which is a remarkable result for a system that uses automatically learned features only.
1505.05008 | 26 | In Table 7, we compare CharWNN results by entity type with the ones of ETLCM T . These results were computed in the selective scenario. CharWNN produces a much better recall than ETLCM T for the classes LOC, PER and ORG. For the ORG entity, the improvement is of 21 points
5http://www.linguateca.pt/primeiroHAREM/harem Arquitectura.html
Table 5: Comparison of different NNs for the HAREM I corpus.
NN       Features                    Total Scenario              Selective Scenario
                                     Prec.   Rec.    F1          Prec.   Rec.    F1
CharWNN  word emb., char emb.        67.16   63.74   65.41       73.98   68.68   71.23
WNN      word emb., suffix, capit.   68.52   63.16   65.73       75.05   68.35   71.54
WNN      word embeddings             63.32   53.23   57.84       68.91   58.77   63.44
CharNN   char embeddings             57.10   50.65   53.68       66.30   54.54   59.85
Table 6: Comparison with the state-of-the-art for the HAREM I corpus.
1505.05008 | 27 | Table 6: Comparison with the State-of-the-art for the HAREM I corpus.
System   Features                                            Total Scenario              Selective Scenario
                                                             Prec.   Rec.    F1          Prec.   Rec.    F1
CharWNN  word emb., char emb.                                74.54   68.53   71.41       78.38   77.49   77.93
ETL-CMT  words, POS tags, NP tags, capitalization,           77.52   53.86   63.56       77.27   65.20   70.72
         word length, dictionaries, gazetteers
in the recall. We believe that a large part of this boost in the recall is due to the unsupervised pre-training of word embeddings, which can leverage large amounts of unlabeled data to produce reliable word representations.
# 4.3 Impact of unsupervised pre-training of word embeddings
classified. They apply their system for a Chinese corpus and achieve state-of-the-art results for the NE categorization task.
Collobert et al. (2011) propose a deep neural network which is equivalent to the WNN architecture described in Section 3.3. They achieve state-of-the-art results for English NER by adding a feature based on gazetteer information.
In Table 8 we assess the impact of unsupervised pre-training of word embeddings on CharWNN performance for both SPA CoNLL-2002 and HAREM I (selective). The results were computed using the CoNLL-2002 evaluation script. For both corpora, CharWNN results are improved when using unsupervised pre-training. The impact of unsupervised pre-training is larger for the HAREM I corpus (13.2 points in the F1) than for the SPA CoNLL-2002 corpus (4.3 points in the F1). We believe one of the main reasons for this difference in the impact is the training set size, which is much smaller in the HAREM I corpus.
# 5 Related Work
1505.05008 | 29 | # 5 Related Work
Passos et al. (2014) extend the Skip-Gram language model (Mikolov et al., 2013) to produce phrase embeddings that are more suitable to be used in a linear-chain CRF to perform NER. Their linear-chain CRF, which also uses additional handcrafted features such as gazetteer-based ones, achieves state-of-the-art results on two English corpora: CoNLL 2003 and OntoNotes NER. The main difference between our approach and the ones proposed in previous work is the use of neural character embeddings. This type of embedding allows us to achieve state-of-the-art results for the full task of identifying and classifying named entities using only automatically learned features. Additionally, we perform experiments with two different languages, while previous work focused on one language.
Some recent work on deep learning for named entity recognition includes Chen et al. (2010), Collobert et al. (2011) and Passos et al. (2014).
Chen et al. (2010) employ deep belief networks (DBN) to perform named entity categorization. In their system, they assume that the boundaries of all the entity mentions were previously identified, which makes their task easier than the one we tackle in this paper. The input for their model is the character-level information of the entity to be
# 6 Conclusions
In this work we approach language-independent NER using a DNN that employs word- and character-level embeddings to perform sequential classification. We demonstrate that the same DNN which was successfully applied for POS tagging can also achieve state-of-the-art results for NER,
Table 7: Results by entity type for the HAREM I corpus.
Entity   CharWNN                     ETL-CMT
         Prec.   Rec.    F1          Prec.   Rec.    F1
DATE     90.27   81.32   85.56       88.29   82.21   85.14
LOC      76.91   78.55   77.72       76.18   68.16   71.95
ORG      70.65   71.56   71.10       65.34   50.29   56.84
PER      81.35   77.07   79.15       81.49   61.14   69.87
VALUE    78.08   74.99   76.51       77.72   70.13   73.73
Overall  78.38   77.49   77.93       77.27   65.20   70.72
Table 8: Impact of unsupervised pre-training of word embeddings on CharWNN performance.
Corpus          Pre-trained word emb.   Precision   Recall   F1
SPA CoNLL-2002  Yes                     82.21       82.21    82.21
SPA CoNLL-2002  No                      78.21       77.63    77.92
HAREM I         Yes                     73.98       68.68    71.23
HAREM I         No                      65.21       52.27    58.03
using the same hyperparameters, and without any handcrafted features. Moreover, we shed some light on the contribution of neural character embeddings for NER, and define new state-of-the-art results for Portuguese and Spanish NER.
# References

[Doddington et al.2004] George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The Automatic Content Extraction (ACE) program: tasks, data, and evaluation. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC-2004), Lisbon, Portugal, May.
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 32 | # References
[Bergstra et al.2010] James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde- Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientiï¬c Computing Conference (SciPy).
[Carreras et al.2002] Xavier Carreras, Llu´ıs M`arques, and Llu´ıs Padr´o. 2002. Named entity extraction us- ing adaboost. In Proceedings of CoNLL-2002, pages 167â170. Taipei, Taiwan.
dos Santos and Ruy Luiz Milidiú. Entropy Guided Transformation Learning - Algorithms and Applications. Springer Briefs in Computer Science. Springer.
[dos Santos and Zadrozny2014] Cícero Nogueira dos Santos and Bianca Zadrozny. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning, JMLR: W&CP volume 32, Beijing, China. | 1505.05008#32 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 33 | [Chen et al.2010] Yu Chen, You Ouyang, Wenjie Li, Dequan Zheng, and Tiejun Zhao. 2010. Using deep belief nets for Chinese named entity categorization. In Proceedings of the Named Entities Workshop, pages 102–109.
[Finkel et al.2005] Jenny Rose Finkel, Trond Grenager, and Christopher Manning. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363–370.
[Collobert et al.2011] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537.
[Lecun et al.1998] Yann Lecun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. 1998. Gradient-based learning applied to document recognition. In Proceedings of the IEEE, pages 2278–2324. | 1505.05008#33 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 34 | [Collobert2011] R. Collobert. 2011. Deep learning for efficient discriminative parsing. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 224–232.
[Mikolov et al.2013] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efï¬cient estima- tion of word representations in vector space. In Pro- ceedings of Workshop at International Conference on Learning Representations.
Julio Cesar Duarte, and Roberto Cavalcante. 2007. Machine learning algorithms for Portuguese named entity recognition. Revista Iberoamericana de Inteligencia Artificial, pages 67–75.
[Passos et al.2014] Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In Proceedings of the Eighteenth Conference on Com- putational Natural Language Learning, pages 78â 86, Ann Arbor, Michigan.
[Santos and Cardoso2007] Diana Santos and Nuno Cardoso. 2007. Reconhecimento de entidades mencionadas em português. Linguateca, Portugal. | 1505.05008#34 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.05008 | 35 | [Tang et al.2014] Buzhou Tang, Hongxin Cao, Xiao- long Wang, Qingcai Chen, and Hua Xu. 2014. Eval- uating word representation features in biomedical named entity recognition tasks. BioMed Research International, 2014.
[Tjong Kim Sang and De Meulder2003] Erik F. Tjong Kim Sang and Fien De Meulder. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Walter Daelemans and Miles Osborne, editors, Proceedings of CoNLL-2003, pages 142–147. Edmonton, Canada.
[Tjong Kim Sang2002] Erik F. Tjong Kim Sang. 2002. Introduction to the conll-2002 shared task: Language-independent named entity recognition. In Proceedings of CoNLL-2002, pages 155â158. Taipei, Taiwan.
[Viterbi1967] A. J. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13(2):260–269, April. | 1505.05008#35 | Boosting Named Entity Recognition with Neural Character Embeddings | Most state-of-the-art named entity recognition (NER) systems rely on
handcrafted features and on the output of other NLP tasks such as
part-of-speech (POS) tagging and text chunking. In this work we propose a
language-independent NER system that uses automatically learned features only.
Our approach is based on the CharWNN deep neural network, which uses word-level
and character-level representations (embeddings) to perform sequential
classification. We perform an extensive number of experiments using two
annotated corpora in two different languages: HAREM I corpus, which contains
texts in Portuguese; and the SPA CoNLL-2002 corpus, which contains texts in
Spanish. Our experimental results shed light on the contribution of neural
character embeddings for NER. Moreover, we demonstrate that the same neural
network which has been successfully applied to POS tagging can also achieve
state-of-the-art results for language-independent NER, using the same
hyperparameters, and without any handcrafted features. For the HAREM I corpus,
CharWNN outperforms the state-of-the-art system by 7.9 points in the F1-score
for the total scenario (ten NE classes), and by 7.2 points in the F1 for the
selective scenario (five NE classes). | http://arxiv.org/pdf/1505.05008 | Cicero Nogueira dos Santos, Victor Guimarães | cs.CL | 9 pages | null | cs.CL | 20150519 | 20150525 | [] |
1505.00853 | 1 | [email protected]
# Tianqi Chen University of Washington
[email protected]
# Mu Li Carnegie Mellon University
[email protected]
# Abstract
In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), the leaky rectified linear unit (Leaky ReLU), the parametric rectified linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on a standard image classification task. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units consistently improves the results. Our findings thus challenge the common belief that sparsity is the key to good performance in ReLU. Moreover, on a small-scale dataset, using a deterministic negative slope or learning it are both prone to overfitting; they are not as effective as their randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembling. | 1505.00853#1 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural networks: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale dataset, using deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 | [
{
"id": "1502.03167"
},
{
"id": "1501.04587"
},
{
"id": "1502.01852"
}
] |
1505.00855 | 1 | arXiv:1505.00855v1 [cs.CV]
Abstract. In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Measuring the visual similarity between artistic items is an essential step for such multimedia systems, which can benefit more high-level multimedia tasks. In order to model this similarity between paintings, we should extract the appropriate visual features for paintings and find out the best approach to learn the similarity metric based on these features. We investigate a comprehensive list of visual features and metric learning approaches to learn an optimized similarity measure between paintings. We develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a painting's style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Our experiments show the value of using this similarity measure for the aforementioned prediction tasks.
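To make the pipeline sketched in this abstract concrete, here is a minimal NumPy sketch of judging painting similarity under a learned metric and using it for a semantic prediction (style). The projection matrix, feature dimensions, and data are placeholders: in the paper's setting the projection would come from a metric-learning step and the features from the visual descriptors it investigates, neither of which is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder gallery: one visual feature vector and one style label per painting.
features = rng.normal(size=(200, 512))   # assumed 512-D descriptors
styles = rng.integers(0, 5, size=200)    # assumed 5 style classes

# A learned linear projection would come from metric learning on art-historical
# labels; a random matrix stands in for it so the sketch runs on its own.
L = rng.normal(size=(64, 512))

def similarity(x, y):
    """Cosine similarity after projecting both paintings with L,
    i.e. a Mahalanobis-like learned metric."""
    px, py = L @ x, L @ y
    return float(px @ py / (np.linalg.norm(px) * np.linalg.norm(py) + 1e-12))

def predict_style(query_features):
    """Nearest-neighbour style prediction under the learned similarity."""
    scores = [similarity(query_features, f) for f in features]
    return int(styles[int(np.argmax(scores))])

print(predict_style(rng.normal(size=512)))
```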
# 1 Introduction | 1505.00855#1 | Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature | In the past few years, the number of fine-art collections that are digitized
and publicly available has been growing rapidly. With the availability of such
large collections of digitized artworks comes the need to develop multimedia
systems to archive and retrieve this pool of data. Measuring the visual
similarity between artistic items is an essential step for such multimedia
systems, which can benefit more high-level multimedia tasks. In order to model
this similarity between paintings, we should extract the appropriate visual
features for paintings and find out the best approach to learn the similarity
metric based on these features. We investigate a comprehensive list of visual
features and metric learning approaches to learn an optimized similarity
measure between paintings. We develop a machine that is able to make
aesthetic-related semantic-level judgments, such as predicting a painting's
style, genre, and artist, as well as providing similarity measures optimized
based on the knowledge available in the domain of art historical
interpretation. Our experiments show the value of using this similarity measure
for the aforementioned prediction tasks. | http://arxiv.org/pdf/1505.00855 | Babak Saleh, Ahmed Elgammal | cs.CV, cs.IR, cs.LG, cs.MM | 21 pages | null | cs.CV | 20150505 | 20150505 | [] |
1505.00853 | 2 | et al., 2014), object detection (Girshick et al., 2014) and tracking (Wang et al., 2015). Despite its depth, one of the key characteristics of modern deep learning systems is to use non-saturated activation functions (e.g. ReLU) to replace their saturated counterparts (e.g. sigmoid, tanh). The advantage of using non-saturated activation functions lies in two aspects: the first is to solve the so-called "exploding/vanishing gradient" problem; the second is to accelerate the convergence speed.
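As a quick numerical illustration of the vanishing-gradient point above (not taken from the paper): the derivative of a saturating unit such as the sigmoid collapses towards zero for large-magnitude inputs, while the ReLU derivative stays at 1 on the positive side.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)          # saturates: tends to 0 for large |x|

def d_relu(x):
    return (x > 0).astype(float)  # stays at 1 for all positive inputs

xs = np.array([-10.0, -1.0, 0.5, 10.0])
print(d_sigmoid(xs))  # approx [4.5e-05, 0.197, 0.235, 4.5e-05]
print(d_relu(xs))     # [0., 0., 1., 1.]
```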
In all of these non-saturated activation functions, the most notable one is the rectified linear unit (ReLU) (Nair & Hinton, 2010; Sun et al., 2014). Briefly speaking, it is a piecewise linear function which prunes the negative part to zero and retains the positive part. It has the desirable property that the activations are sparse after passing through ReLU. It is commonly believed that the superior performance of ReLU comes from this sparsity (Glorot et al., 2011; Sun et al., 2014). In this paper, we want to ask two questions: First, is sparsity the most important factor for good performance? Second, can we design better non-saturated activation functions that could beat ReLU? | 1505.00853#2 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural networks: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale dataset, using deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 | [
{
"id": "1502.03167"
},
{
"id": "1501.04587"
},
{
"id": "1502.01852"
}
] |
1505.00855 | 2 | # 1 Introduction
In the past few years, the number of fine-art collections that are digitized and publicly available has been growing rapidly. Such collections span classical 1 and modern and contemporary artworks 2. With the availability of such large collections of digitized artworks comes the need to develop multimedia systems to archive and retrieve this pool of data. Typically these collections, in particular early modern ones, come with metadata in the form of annotations by art historians and curators, including information about each painting's artist, style, date, genre, etc. For online galleries displaying contemporary artwork, there is a need to develop automated recommendation systems that can retrieve "similar" paintings that the user might like to buy. This highlights the need to investigate metrics of visual similarity among digitized paintings that are optimized for the domain of painting.
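A small sketch of the retrieval use case mentioned above: ranking a digitized collection by similarity to a query painting. The feature vectors and their dimensionality are placeholders; in practice they would be the visual features studied in the paper, and the similarity could be the learned metric rather than plain cosine.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder gallery: one (L2-normalised) feature vector per digitized painting.
gallery = rng.normal(size=(1000, 256))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def top_k_similar(query_vec, k=5):
    """Rank the whole gallery by cosine similarity to the query painting
    and return the indices of the k most similar items."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = gallery @ q
    return np.argsort(-scores)[:k]

print(top_k_similar(rng.normal(size=256)))
```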
The field of computer vision has made significant leaps in getting digital systems to recognize and categorize objects and scenes in images and videos. These advances have been driven by a widespread need for the technology, since cameras are everywhere now. However, a person looking at a painting can make sophisticated inferences
# 1 Examples: Wikiart; Arkyves; BBC Yourpainting 2 Examples: Artsy; Behance; Artnet
Babak Saleh, Ahmed Elgammal | 1505.00855#2 | Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature | In the past few years, the number of fine-art collections that are digitized
and publicly available has been growing rapidly. With the availability of such
large collections of digitized artworks comes the need to develop multimedia
systems to archive and retrieve this pool of data. Measuring the visual
similarity between artistic items is an essential step for such multimedia
systems, which can benefit more high-level multimedia tasks. In order to model
this similarity between paintings, we should extract the appropriate visual
features for paintings and find out the best approach to learn the similarity
metric based on these features. We investigate a comprehensive list of visual
features and metric learning approaches to learn an optimized similarity
measure between paintings. We develop a machine that is able to make
aesthetic-related semantic-level judgments, such as predicting a painting's
style, genre, and artist, as well as providing similarity measures optimized
based on the knowledge available in the domain of art historical
interpretation. Our experiments show the value of using this similarity measure
for the aforementioned prediction tasks. | http://arxiv.org/pdf/1505.00855 | Babak Saleh, Ahmed Elgammal | cs.CV, cs.IR, cs.LG, cs.MM | 21 pages | null | cs.CV | 20150505 | 20150505 | [] |
1505.00853 | 3 | # 1. Introduction
Convolutional neural networks (CNN) have achieved great success in various computer vision tasks, such as image classification (Krizhevsky et al., 2012; Szegedy
We consider a broader class of activation functions, namely the rectified unit family. In particular, we are interested in the leaky ReLU and its variants. In contrast to ReLU, in which the negative part is totally dropped, leaky ReLU assigns a non-zero slope to it. The first variant is called parametric rectified linear unit (PReLU) (He et al., 2015). In PReLU, the slopes of the negative part are learned from data rather than predefined. The authors claimed that PReLU is the key factor in surpassing human-level performance on the ImageNet classification (Russakovsky et al., 2015) task.
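A minimal NumPy sketch of the PReLU idea described above: the negative-part slope a is a parameter updated by backpropagation rather than fixed. It uses the common y = a*x parameterization for negative inputs (the leaky case in equation (2) later in this paper is written as x/a); the values below are illustrative only, not the authors' implementation.

```python
import numpy as np

def prelu(x, a):
    """PReLU forward pass: identity for x >= 0, learned slope a for x < 0."""
    return np.where(x >= 0, x, a * x)

def prelu_grad_a(x, grad_out):
    """Gradient of the loss w.r.t. the slope a: only negative inputs
    contribute, each weighted by its incoming gradient."""
    return float(np.sum(grad_out * np.where(x < 0, x, 0.0)))

x = np.array([-2.0, -0.5, 1.0, 3.0])
a = 0.25                                   # initial slope, updated during training
print(prelu(x, a))                         # [-0.5, -0.125, 1.0, 3.0]
print(prelu_grad_a(x, np.ones_like(x)))    # -2.5
```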
Empirical Evaluation of Rectiï¬ed Activations in Convolutional Network | 1505.00853#3 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural networks: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale dataset, using deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 | [
{
"id": "1502.03167"
},
{
"id": "1501.04587"
},
{
"id": "1502.01852"
}
] |
1505.00855 | 3 | # 1 Examples: Wikiart; Arkyves; BBC Yourpainting 2 Examples: Artsy; Behance; Artnet
Babak Saleh, Ahmed Elgammal
[Figure 1 diagram: a painting's visual features are passed through style-based, genre-based, and artist-based learned projections, each followed by an SVM classifier, yielding predictions such as "Renaissance" (style), "Portrait" (genre), and "Leonardo Da Vinci" (artist).]
Fig. 1: Illustration of our system for classification of fine-art paintings. We investigated a variety of visual features and metric learning approaches to recognize the Style, Genre and Artist of a painting. | 1505.00855#3 | Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature | In the past few years, the number of fine-art collections that are digitized
and publicly available has been growing rapidly. With the availability of such
large collections of digitized artworks comes the need to develop multimedia
systems to archive and retrieve this pool of data. Measuring the visual
similarity between artistic items is an essential step for such multimedia
systems, which can benefit more high-level multimedia tasks. In order to model
this similarity between paintings, we should extract the appropriate visual
features for paintings and find out the best approach to learn the similarity
metric based on these features. We investigate a comprehensive list of visual
features and metric learning approaches to learn an optimized similarity
measure between paintings. We develop a machine that is able to make
aesthetic-related semantic-level judgments, such as predicting a painting's
style, genre, and artist, as well as providing similarity measures optimized
based on the knowledge available in the domain of art historical
interpretation. Our experiments show the value of using this similarity measure
for the aforementioned prediction tasks. | http://arxiv.org/pdf/1505.00855 | Babak Saleh, Ahmed Elgammal | cs.CV, cs.IR, cs.LG, cs.MM | 21 pages | null | cs.CV | 20150505 | 20150505 | [] |
1505.00853 | 4 | Empirical Evaluation of Rectiï¬ed Activations in Convolutional Network
The second variant is called randomized rectified linear unit (RReLU). In RReLU, the slopes of negative parts are randomized in a given range in the training, and then fixed in the testing. In a recent Kaggle National Data Science Bowl (NDSB) competition1, it is reported that RReLU could reduce overfitting due to its randomized nature.
In this paper, we empirically evaluate these four kinds of activation functions. Based on our experiments, we conclude that on small datasets, Leaky ReLU and its variants are consistently better than ReLU in convolutional neural networks. RReLU is favorable due to its randomness in training, which reduces the risk of overfitting. For large datasets, more investigation should be done in future work.
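A minimal NumPy sketch of the RReLU behaviour described above: during training the negative-part slope is drawn at random from a given range for each activation, and at test time a fixed average slope is used instead. The particular range below is an assumption for illustration, not a value quoted in this passage.

```python
import numpy as np

rng = np.random.default_rng(3)

def rrelu(x, lower=1.0 / 8.0, upper=1.0 / 3.0, training=True):
    """RReLU: random negative slope per element in training,
    the fixed average slope (lower + upper) / 2 at test time."""
    if training:
        slope = rng.uniform(lower, upper, size=x.shape)
    else:
        slope = (lower + upper) / 2.0
    return np.where(x >= 0, x, slope * x)

x = np.array([-2.0, -1.0, 0.5, 2.0])
print(rrelu(x, training=True))   # negative entries scaled by random slopes
print(rrelu(x, training=False))  # negative entries scaled by the average slope
```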
# 2. Rectiï¬ed Units | 1505.00853#4 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural networks: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale dataset, using deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 | [
{
"id": "1502.03167"
},
{
"id": "1501.04587"
},
{
"id": "1502.01852"
}
] |
1505.00855 | 4 | beyond just recognizing a tree, a chair, or the ï¬gure of Christ. Even individuals without speciï¬c art historical training can make assumptions about a paintingâs genre (portrait or landscape), its style (impressionist or abstract), what century it was created, the artists who likely created the work and so on. Obviously, the accuracy of such assumptions depends on the viewerâs level of knowledge and exposure to art history. Learning and judging such complex visual concepts is an impressive ability of human perception [2]. The ultimate goal of our research is to develop a machine that is able to make aesthetic-related semantic-level judgments, such as predicting a paintingâs style, genre, and artist, as well as providing similarity measures optimized based on the knowledge available in the domain of art historical interpretation. Immediate questions that arise include, but are not limited to: What visual features should be used to encode informa- tion in images of paintings? How does one weigh different visual features to achieve a useful similarity measure? What type of art historical knowledge should be used to optimize such similarity measures? In this paper we address these questions and aim to provide answers that can beneï¬t researchers in the area of computer-based analysis of art. Our work is based on a systematic methodology and a comprehensive evaluation on one of the largest available digitized art datasets. | 1505.00855#4 | Large-scale Classification of Fine-Art Paintings: Learning The Right Metric on The Right Feature | In the past few years, the number of fine-art collections that are digitized
and publicly available has been growing rapidly. With the availability of such
large collections of digitized artworks comes the need to develop multimedia
systems to archive and retrieve this pool of data. Measuring the visual
similarity between artistic items is an essential step for such multimedia
systems, which can benefit more high-level multimedia tasks. In order to model
this similarity between paintings, we should extract the appropriate visual
features for paintings and find out the best approach to learn the similarity
metric based on these features. We investigate a comprehensive list of visual
features and metric learning approaches to learn an optimized similarity
measure between paintings. We develop a machine that is able to make
aesthetic-related semantic-level judgments, such as predicting a painting's
style, genre, and artist, as well as providing similarity measures optimized
based on the knowledge available in the domain of art historical
interpretation. Our experiments show the value of using this similarity measure
for the aforementioned prediction tasks. | http://arxiv.org/pdf/1505.00855 | Babak Saleh, Ahmed Elgammal | cs.CV, cs.IR, cs.LG, cs.MM | 21 pages | null | cs.CV | 20150505 | 20150505 | [] |
1505.00853 | 5 | # 2. Rectified Units
In this section, we introduce the four kinds of rectified units: rectified linear (ReLU), leaky rectified linear (Leaky ReLU), parametric rectified linear (PReLU) and randomized rectified linear (RReLU). We illustrate them in Fig. 1 for comparison. In the sequel, we use x_ji to denote the input of the ith channel in the jth example, and y_ji to denote the corresponding output after passing the activation function. In the following subsections, we introduce each rectified unit formally.
# 2.2. Leaky Rectified Linear Unit
Leaky Rectified Linear activation was first introduced in acoustic models (Maas et al., 2013). Mathematically, we have
$$y_i = \begin{cases} x_i & \text{if } x_i \ge 0 \\ \dfrac{x_i}{a_i} & \text{if } x_i < 0 \end{cases} \qquad (2)$$

where a_i is a fixed parameter in the range (1, +\infty). In the original paper, the authors suggest setting a_i to a large number like 100. In addition to this setting, we also experiment with a smaller a_i = 5.5 in our paper.
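A direct NumPy rendering of equation (2) above, with the two values of a mentioned in the text; a small sketch for illustration rather than the authors' implementation.

```python
import numpy as np

def leaky_relu(x, a=100.0):
    """Equation (2): y = x for x >= 0 and y = x / a for x < 0,
    with a a fixed constant greater than 1."""
    return np.where(x >= 0, x, x / a)

x = np.array([-5.5, -1.0, 0.0, 2.0])
print(leaky_relu(x, a=100.0))  # a = 100: the very small leak originally suggested
print(leaky_relu(x, a=5.5))    # a = 5.5: the larger leak also tried in the paper
```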
# 2.3. Parametric Rectiï¬ed Linear Unit | 1505.00853#5 | Empirical Evaluation of Rectified Activations in Convolutional Network | In this paper we investigate the performance of different types of rectified
activation functions in convolutional neural networks: standard rectified linear
unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified
linear unit (PReLU) and a new randomized leaky rectified linear unit (RReLU).
We evaluate these activation functions on a standard image classification task.
Our experiments suggest that incorporating a non-zero slope for negative part
in rectified activation units could consistently improve the results. Thus our
findings are negative on the common belief that sparsity is the key of good
performance in ReLU. Moreover, on small scale dataset, using deterministic
negative slope or learning it are both prone to overfitting. They are not as
effective as using their randomized counterpart. By using RReLU, we achieved
75.68\% accuracy on CIFAR-100 test set without multiple test or ensemble. | http://arxiv.org/pdf/1505.00853 | Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li | cs.LG, cs.CV, stat.ML | null | null | cs.LG | 20150505 | 20151127 | [
{
"id": "1502.03167"
},
{
"id": "1501.04587"
},
{
"id": "1502.01852"
}
] |