a_id (int64, 7.84k–73.8M) | a_body (string, lengths 61–33k) | a_creation_date (string, lengths 25–32) | a_last_activity_date (string, lengths 25–32) | a_last_edit_date (string, lengths 25–32, ⌀) | a_tags (float64) | q_id (int64, 826–73.8M) | q_body (string, lengths 61–29.9k) | q_creation_date (string, lengths 25–32) | q_last_activity_date (string, lengths 25–32) | q_last_edit_date (string, lengths 25–32, ⌀) | q_tags (string, lengths 1–103) | _arxiv_links (string, lengths 2–6.69k) | _n_arxiv_links (int64, 0–94) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,575,362 | <p>Let's assume that your sentence-similarity scheme uses only word-vectors as an input – as in simple word-vector averaging schemes, or Word Mover's Distance. </p>
<p>It should be possible to do what you've suggested, provided that:</p>
<ul>
<li>you have good sets of word-vectors for each language's words</li>
<li>the coordinate spaces of the word-vectors are compatible, meaning the words for the exact-same things in both languages have nearly-identical coordinates (and other words with similar meanings have close coordinates)</li>
</ul>
<p>That second quality is not automatically assured. In fact, given the random initialization of word2vec models, and other randomization introduced by the algorithm/implementation, even subsequent training runs on the exact same data won't place words into the exact same places. So word-vectors trained on totally-separate English/Dutch corpuses won't likely place equivalent words at the same coordinates. </p>
<p>But, you can learn an algebraic-transformation between two spaces, based on certain anchor/reference word-pairs (that you know should have similar vectors). You can then apply that transformation to all words in one of the two sets, which results in you having vectors for those 'foreign' words within the comparable coordinate-space of the 'canonical' word-set. </p>
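<p>As a concrete illustration, here's a minimal sketch of learning such a transformation via least-squares over anchor pairs – the names (<code>nl_vecs</code>, <code>en_vecs</code>, <code>anchors</code>) are illustrative assumptions, not any particular library's API:</p>
<pre><code>import numpy as np

# Assumes nl_vecs/en_vecs are gensim KeyedVectors, and anchors is a list of
# (dutch_word, english_word) pairs known to be translations of each other.
def learn_transform(src_vecs, tgt_vecs, anchors):
    X = np.vstack([src_vecs[s] for s, t in anchors])  # source anchor vectors
    Y = np.vstack([tgt_vecs[t] for s, t in anchors])  # target anchor vectors
    W, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)    # solve X @ W ~= Y
    return W

# W = learn_transform(nl_vecs, en_vecs, anchors)
# en_space_vec = nl_vecs['hond'] @ W  # Dutch 'hond' mapped into English space
</code></pre>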
<p>In fact this very idea was used in one of the first word2vec papers:</p>
<p>"<a href="https://arxiv.org/abs/1309.4168" rel="nofollow noreferrer">Exploiting Similarities among Languages for Machine Translation</a>"</p>
<p>If you were to apply a similar transformation on one of your language word-vector sets, then use those transformed vectors as inputs to your sentence-vector scheme, those sentence-vectors would likely have some useful comparability to sentence-vectors in the other language, bootstrapped from word-vectors in the same coordinate-space. </p>
<p><strong>Update:</strong> There's a very interesting <a href="https://arxiv.org/abs/1601.02502" rel="nofollow noreferrer">recent paper</a> that manages to train word-vectors in multiple languages simultaneously, using a corpus that includes both raw sentences in each single language, and a (smaller) set of aligned-sentences that are known to mean the same in both languages. Gensim doesn't yet support this mode, but there's <a href="https://groups.google.com/d/msg/gensim/zksGwKHnIUA/7lde13FbAgAJ" rel="nofollow noreferrer">discussion of supporting it</a> in a future refactor. </p> | 2017-08-08 18:15:39.533000+00:00 | 2017-10-20 17:38:32.357000+00:00 | 2017-10-20 17:38:32.357000+00:00 | null | 45,571,295 | <p>I am using word embeddings for finding similarity between two sentences. Using word2vec, I also get a similarity measure if one sentence is in English and the other one in Dutch (though not very good). </p>
<p>So I started wondering if it's possible to compute the similarity between two sentences in two different languages (without an explicit translation), especially if the languages have some similarities (English/Dutch)?</p> | 2017-08-08 14:38:45.970000+00:00 | 2020-05-16 10:58:05.647000+00:00 | 2017-08-08 15:08:50.350000+00:00 | nlp|nltk|gensim|word2vec | ['https://arxiv.org/abs/1309.4168', 'https://arxiv.org/abs/1601.02502', 'https://groups.google.com/d/msg/gensim/zksGwKHnIUA/7lde13FbAgAJ'] | 3
53,404,483 | <p>I had the same issue where I wanted to expand all of the abstract sections on Arxiv automatically. Using the Chrome dev console I found this worked exactly like I wanted: </p>
<pre><code>('.abstract-full').css('display','inline')
</code></pre>
<p>Since the "abstract-full" class had display: none.</p> | 2018-11-21 02:26:31.907000+00:00 | 2018-11-21 02:26:31.907000+00:00 | null | null | 51,174,478 | <p>I am interested in finding out if there is a way to expand all the collapsible sections on a webpage simultaneously. A relevant section on the webpage I am looking at looks like this:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><tr>
<td></td>
<td style="..;">
<div style="..">
<span id=".." style="display:inline;"><br />
<div style="display:inline"> ...
</div>
</span>
<span id="toHide1234" style="display:none;"><br />
<div style="display:inline">
<p>.....</p>
</div>
</span>
<a id="expcoll1234" href="JavaScript:expandcollapse('expcoll1234',1234)">
expand
</a>
</div>
</td>
</tr></code></pre>
</div>
</div>
</p>
<p>As seen above, by clicking on <code>expand</code> link, this section would expand. Trouble is, there are hundreds of such <code>expand</code> links on the webpage I am interested in, and there are many such web pages I want to do this for. </p>
<p>Any thoughts on this would be much appreciated. I need a really simple way to do this, as I am not very well versed with web programming. Just know very elementary HTML. </p> | 2018-07-04 13:07:16.470000+00:00 | 2018-11-21 02:26:31.907000+00:00 | 2018-07-05 10:55:18.303000+00:00 | javascript|html|css|webpage | [] | 0 |
36,296,550 | <p>There's a good, clear recent paper on this <a href="http://arxiv.org/abs/1508.03167" rel="nofollow noreferrer">here</a>, and the references (especially Shun et al. 2015) are worth a read.</p>
<p>But basically you can do this using the same sort of approach that's used in <code>sort -R</code>: shuffle by giving each row a random key value and sorting on that key. And there are lots of ways to do good parallel distributed sort.</p>
<p>Here's a basic version in python + MPI using an odd-even sort; it goes through P communication steps if P is the number of processors. You can do better than that, but this is pretty simple to understand; it's discussed in <a href="https://stackoverflow.com/questions/23633916/how-does-mpi-odd-even-sort-work">this question</a>.</p>
<pre class="lang-python prettyprint-override"><code>from __future__ import print_function
import sys
import random
from mpi4py import MPI
comm = MPI.COMM_WORLD
def exchange(localdata, sendrank, recvrank):
"""
Perform a merge-exchange with a neighbour;
sendrank sends local data to recvrank,
which merge-sorts it, and then sends lower
data back to the lower-ranked process and
keeps upper data
"""
rank = comm.Get_rank()
assert rank == sendrank or rank == recvrank
assert sendrank < recvrank
if rank == sendrank:
comm.send(localdata, dest=recvrank)
newdata = comm.recv(source=recvrank)
else:
bothdata = list(localdata)
otherdata = comm.recv(source=sendrank)
bothdata = bothdata + otherdata
bothdata.sort()
comm.send(bothdata[:len(otherdata)], dest=sendrank)
newdata = bothdata[len(otherdata):]
return newdata
def print_by_rank(data, rank, nprocs):
""" crudely attempt to print data coherently """
for proc in range(nprocs):
if proc == rank:
print(str(rank)+": "+str(data))
comm.barrier()
return
def odd_even_sort(data):
rank = comm.Get_rank()
nprocs = comm.Get_size()
data.sort()
for step in range(1, nprocs+1):
if ((rank + step) % 2) == 0:
if rank < nprocs - 1:
data = exchange(data, rank, rank+1)
elif rank > 0:
data = exchange(data, rank-1, rank)
return data
def main():
# everyone get their data
rank = comm.Get_rank()
nprocs = comm.Get_size()
n_per_proc = 5
data = list(range(n_per_proc*rank, n_per_proc*(rank+1)))
if rank == 0:
print("Original:")
print_by_rank(data, rank, nprocs)
# tag your data with random values
data = [(random.random(), item) for item in data]
# now sort it by these random tags
data = odd_even_sort(data)
if rank == 0:
print("Shuffled:")
print_by_rank([x for _, x in data], rank, nprocs)
return 0
if __name__ == "__main__":
sys.exit(main())
</code></pre>
<p>Running gives:</p>
<pre><code>$ mpirun -np 5 python mergesort_shuffle.py
Original:
0: [0, 1, 2, 3, 4]
1: [5, 6, 7, 8, 9]
2: [10, 11, 12, 13, 14]
3: [15, 16, 17, 18, 19]
4: [20, 21, 22, 23, 24]
Shuffled:
0: [19, 17, 4, 20, 9]
1: [23, 12, 3, 2, 8]
2: [14, 6, 13, 15, 1]
3: [11, 0, 22, 16, 18]
4: [5, 10, 21, 7, 24]
</code></pre> | 2016-03-29 22:54:00.753000+00:00 | 2016-04-01 12:36:30.677000+00:00 | 2017-05-23 11:46:40.410000+00:00 | null | 36,266,968 | <p>I am looking to shuffle an array in parallel. I have found that doing an algorithm similar to bitonic sort but with a random (50/50) re-order results in an equal distribution but only if the array is a power of 2. I've considered the Yates Fisher Shuffle but I can't see how I could parallel-ize it in order to avoid O(N) computations.</p>
<p>Any advice?</p>
<p>Thanks!</p> | 2016-03-28 16:59:40.677000+00:00 | 2017-04-08 00:03:23.277000+00:00 | 2016-03-30 05:07:18.117000+00:00 | parallel-processing|shuffle | ['http://arxiv.org/abs/1508.03167', 'https://stackoverflow.com/questions/23633916/how-does-mpi-odd-even-sort-work'] | 2 |
63,427,399 | <p>First of all this is a hyperparameter of your training. Choosing the right batch size is non-trivial and depends on several factors.</p>
<p>Thus, if you can afford it, you could try using a hyperparameter optimization approach (e.g. grid search, random search, evolutionary strategies, or Bayesian optimization methods such as TPE) to find the "optimal" batch size.</p>
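<p>For example, a minimal sketch of such a search with scikit-learn (the estimator choice and data names <code>X, y</code> here are illustrative):</p>
<pre><code>from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Treat batch_size as just another hyperparameter to search over;
# X, y are assumed to be your (possibly subsampled) training data.
param_grid = {'batch_size': [32, 64, 128, 256, 512]}
search = GridSearchCV(MLPClassifier(max_iter=50), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
</code></pre>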
<p>If you cannot afford it, I would suggest considering the insights from this <a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">paper</a>. In that case, find a good trade-off between your computational constraints and the smallest feasible batch size.</p> | 2020-08-15 14:58:40.957000+00:00 | 2020-08-15 14:58:40.957000+00:00 | null | null | 63,426,898 | <p>I have 1000000 points of training data. What would be a good batch size to use?
I was thinking 32, but I think that would take ages. It's on CPU as well, so I don't want to use
too high a batch size.</p> | 2020-08-15 14:04:07.823000+00:00 | 2020-08-15 14:58:40.957000+00:00 | null | python|scikit-learn|neural-network | ['https://arxiv.org/abs/1609.04836'] | 1
30,840,454 | <p>The skip-gram architecture has word embeddings as its output (and its input). Depending on its precise implementation, the network may therefore produce two embeddings per word (one embedding for the word as an input word, and one embedding for the word as an output word; this is the case in the basic skip-gram architecture with the traditional softmax function), or one embedding per word (this is the case in a setup with the hierarchical softmax as an approximation to the full softmax, for example). </p>
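<p>To make this concrete, here is a minimal sketch of inspecting both tables in gensim's implementation – the attribute names assume a recent gensim version trained with negative sampling:</p>
<pre><code>from gensim.models import Word2Vec

sentences = [["the", "quick", "brown", "fox"], ["the", "lazy", "dog"]]
model = Word2Vec(sentences, vector_size=10, min_count=1, sg=1, negative=5)

idx = model.wv.key_to_index["fox"]
input_vec = model.wv.vectors[idx]  # embedding of 'fox' as an input word
output_vec = model.syn1neg[idx]    # embedding of 'fox' as an output word
</code></pre>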
<p>You can find more information about these architectures in the original word2vec papers, such as <a href="http://arxiv.org/pdf/1310.4546.pdf" rel="nofollow">Distributed Representations of Words and Phrases
and their Compositionality</a> by Mikolov et al.</p> | 2015-06-15 08:23:31.300000+00:00 | 2015-06-15 08:23:31.300000+00:00 | null | null | 30,835,737 | <p>In the Word2Vec Skip-gram setup that follows, what is the data setup for the output layer? Is it a matrix that is zero everywhere but with a single "1" in each of the C rows - that represents the words in the C context? </p>
<p><img src="https://i.stack.imgur.com/igSuE.png" alt="enter image description here"></p>
<p><strong>Added to describe the data setup question:</strong></p>
<p>Meaning, what would the dataset presented to the NN look like? Let's consider this to be "what does a single training example look like?". I assume the <em>total</em> input is a matrix, where each row is a word in the vocabulary (and there is a column for each word as well, and each cell is zero except the cell for the specific word - one-hot encoded)? Thus, a single training example is 1xV as shown below (all zeros except for the specific word, whose value is a 1). This aligns with the picture above in that the input is V-dim. I expected that the total input matrix would have duplicated rows however - where the same one-hot encoded vector would be repeated for each time the word was found in the corpus (as the output or target variable would be different).</p>
<p>The Output (target) is more confusing to me. I expected it would exactly mirror the input -- a single training example has a "multi"-hot encoded vector that is zero except for a "1" in C of the cells, denoting that a particular word was in the context of the input word (C = 5 if we are looking, for example, 2 words behind and 3 words ahead of the given input word instance). The picture doesn't seem to agree with this though. I don't understand what appear to be C different output layers that share the same W' weight matrix? </p> | 2015-06-14 23:51:18.943000+00:00 | 2015-07-07 03:32:22.380000+00:00 | 2015-06-15 18:46:01.227000+00:00 | machine-learning|nlp|neural-network | ['http://arxiv.org/pdf/1310.4546.pdf'] | 1
<p>Assuming you store your dictionary in a <code>set()</code>, <a href="https://wiki.python.org/moin/TimeComplexity" rel="noreferrer">lookup is <strong>O(1)</strong> on average (worst case <strong>O(n)</strong>)</a>.</p>
<p>You can generate all the valid words at hamming distance 1 from a word:</p>
<pre><code>>>> import string
>>> def neighbours(word):
... for j in range(len(word)):
... for d in string.ascii_lowercase:
... word1 = ''.join(d if i==j else c for i,c in enumerate(word))
... if word1 != word and word1 in words: yield word1
...
>>> {word: list(neighbours(word)) for word in words}
{'bot': ['lot'], 'lol': ['lot'], 'lot': ['bot', 'lol']}
</code></pre>
<p>If <strong>M</strong> is the length of a word, <strong>L</strong> the length of the alphabet (i.e. 26), the <strong>worst case</strong> time complexity of finding neighbouring words with this approach is <strong>O(L*M*N)</strong>.</p>
<p>The time complexity of the "easy way" approach is <strong>O(N^2)</strong>.</p>
<p>When is this approach better? When <code>L*M < N</code>, i.e., considering only lowercase letters, when <code>M < N/26</code>. (I considered only the worst case here.)</p>
<p>Note: <a href="http://arxiv.org/pdf/1208.6109.pdf" rel="noreferrer">the average length of an English word is 5.1 letters</a>. Thus, you should consider this approach if your dictionary size is bigger than 132 words.</p>
<p>Probably it is possible to achieve better performance than this. However this was really simple to implement.</p>
<h2>Experimental benchmark:</h2>
<p>The "easy way" algorithm (<strong>A1</strong>):</p>
<pre><code>from itertools import zip_longest
def hammingdist(w1,w2): return sum(1 if c1!=c2 else 0 for c1,c2 in zip_longest(w1,w2))
def graph1(words): return {word: [n for n in words if hammingdist(word,n) == 1] for word in words}
</code></pre>
<p>This algorithm (<strong>A2</strong>):</p>
<pre><code>def graph2(words): return {word: list(neighbours(word)) for word in words}
</code></pre>
<p>Benchmarking code:</p>
<pre><code>import random
import string
from timeit import Timer

for dict_size in range(100,6000,100):
    words = set([''.join(random.choice(string.ascii_lowercase) for x in range(3)) for _ in range(dict_size)])
    t1 = Timer(lambda: graph1(words)).timeit(10)
    t2 = Timer(lambda: graph2(words)).timeit(10)
    print('%d,%f,%f' % (dict_size,t1,t2))
</code></pre>
<p>Output:</p>
<pre><code>100,0.119276,0.136940
200,0.459325,0.233766
300,0.958735,0.325848
400,1.706914,0.446965
500,2.744136,0.545569
600,3.748029,0.682245
700,5.443656,0.773449
800,6.773326,0.874296
900,8.535195,0.996929
1000,10.445875,1.126241
1100,12.510936,1.179570
...
</code></pre>
<p><img src="https://i.stack.imgur.com/nTF0P.png" alt="data plot"></p>
<p>I ran another benchmark with smaller steps of N to see it more closely:</p>
<pre><code>10,0.002243,0.026343
20,0.010982,0.070572
30,0.023949,0.073169
40,0.035697,0.090908
50,0.057658,0.114725
60,0.079863,0.135462
70,0.107428,0.159410
80,0.142211,0.176512
90,0.182526,0.210243
100,0.217721,0.218544
110,0.268710,0.256711
120,0.334201,0.268040
130,0.383052,0.291999
140,0.427078,0.312975
150,0.501833,0.338531
160,0.637434,0.355136
170,0.635296,0.369626
180,0.698631,0.400146
190,0.904568,0.444710
200,1.024610,0.486549
210,1.008412,0.459280
220,1.056356,0.501408
...
</code></pre>
<p><img src="https://i.stack.imgur.com/MYjhx.png" alt="data plot 2"></p>
<p>You see the tradeoff is very low (100 for dictionaries of words with length=3). For small dictionaries the O(N^2) algorithm performs <em>slightly</em> better, but that is easily beaten by the O(LMN) algorithm as N grows.</p>
<p>For dictionaries with longer words, the O(LMN) algorithm remains linear in N, it just has a different slope, so the tradeoff moves slightly to the right (130 for length=5).</p> | 2015-06-28 15:12:12.647000+00:00 | 2015-06-28 17:00:30.973000+00:00 | 2015-06-28 17:00:30.973000+00:00 | null | 31,100,623 | <p>I want to build a graph from a list of words with <a href="https://en.wikipedia.org/wiki/Hamming_distance">Hamming distance</a> of (say) 1, or to put it differently, two words are connected if they only differ from one letter (<em>lo<strong>l</em></strong> -> <em>lo<strong>t</em></strong>).</p>
<p>so that given</p>
<p><code>words = [ lol, lot, bot ]</code></p>
<p>the graph would be </p>
<pre><code>{
'lol' : [ 'lot' ],
'lot' : [ 'lol', 'bot' ],
'bot' : [ 'lot' ]
}
</code></pre>
<p>The easy way is to compare every word in the list with every other and count the different chars; sadly, this is a <code>O(N^2)</code> algorithm.</p>
<p>Which algo/ds/strategy can I use to to achieve better performance?</p>
<p>Also, let's assume only latin chars, and all the words have the same length.</p> | 2015-06-28 13:57:19.350000+00:00 | 2015-06-28 17:00:30.973000+00:00 | 2015-06-28 15:11:56.763000+00:00 | python|algorithm|graph-algorithm|hamming-distance | ['https://wiki.python.org/moin/TimeComplexity', 'http://arxiv.org/pdf/1208.6109.pdf'] | 2 |
64,823,261 | <p>Let me start simple; since you have square matrices for both input and filter, let me work through one dimension. Then you can apply the same to the other dimension(s). Imagine you are building fences between trees: if there are N trees, you have to build N-1 fences. Now apply that analogy to convolution layers.</p>
<p>Your output size will be: input size - filter size + 1</p>
<p><em>Because your filter can only have n-1 steps, as with the fences I mentioned.</em></p>
<p>Let's calculate your output with that idea:
128 - 5 + 1 = 124.
The same holds for the other dimension. So now you have a 124 x 124 image.</p>
<p>That is for one filter.</p>
<p>If you apply this 40 times you will have another dimension: 124 x 124 x 40</p>
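<p>A quick way to check this arithmetic, given the question's pytorch tag, is a minimal sketch like the following:</p>
<pre><code>import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=40, kernel_size=5)
x = torch.randn(1, 3, 128, 128)  # a batch of one 3x128x128 image
print(conv(x).shape)             # torch.Size([1, 40, 124, 124])
</code></pre>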
<p>Here is a great guide if you want to know more about advanced convolution arithmetic: <a href="https://arxiv.org/pdf/1603.07285.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.07285.pdf</a></p> | 2020-11-13 15:07:20.420000+00:00 | 2020-11-13 15:07:20.420000+00:00 | null | null | 53,580,088 | <p>How do I calculate the output size in a convolution layer?</p>
<p>For example, I have a 2D convolution layer that takes a 3x128x128 input and has 40 filters of size 5x5.</p> | 2018-12-02 12:09:28.557000+00:00 | 2022-03-29 06:59:44.023000+00:00 | 2022-03-29 06:59:44.023000+00:00 | machine-learning|deep-learning|pytorch|conv-neural-network | ['https://arxiv.org/pdf/1603.07285.pdf'] | 1 |
4,398,739 | <p>Please see the following paper</p>
<p><a href="http://arxiv.org/abs/1008.1459" rel="nofollow">Actor Model of Computation</a></p> | 2010-12-09 13:33:57.117000+00:00 | 2010-12-09 13:33:57.117000+00:00 | null | null | 2,524,499 | <p>I read a chapter in a book (Seven languages in Seven Weeks by Bruce A. Tate) about Matz (Inventor of Ruby) saying that 'I would remove the thread and add actors, or some other more advanced concurrency features'. </p>
<ul>
<li>Why and how an actor model can be an advanced concurrency model that replaces the threading?</li>
<li>What other models are the 'advanced concurrency model'?</li>
</ul> | 2010-03-26 15:40:10.100000+00:00 | 2010-12-09 13:41:00.357000+00:00 | null | ruby|multithreading|concurrency | ['http://arxiv.org/abs/1008.1459'] | 1 |
4,398,807 | <p>Also please see</p>
<p><a href="http://arxiv.org/abs/1008.2748" rel="nofollow">ActorScript(TM) extension of C#(TM), Java(TM), and Objective C(TM): iAdaptive(TM) concurrency for antiCloud(TM) privacy and securitY</a></p> | 2010-12-09 13:41:00.357000+00:00 | 2010-12-09 13:41:00.357000+00:00 | null | null | 2,524,499 | <p>I read a chapter in a book (Seven languages in Seven Weeks by Bruce A. Tate) about Matz (Inventor of Ruby) saying that 'I would remove the thread and add actors, or some other more advanced concurrency features'. </p>
<ul>
<li>Why and how an actor model can be an advanced concurrency model that replaces the threading?</li>
<li>What other models are the 'advanced concurrency model'?</li>
</ul> | 2010-03-26 15:40:10.100000+00:00 | 2010-12-09 13:41:00.357000+00:00 | null | ruby|multithreading|concurrency | ['http://arxiv.org/abs/1008.2748'] | 1 |
2,257,800 | <p>This greedy algorithm produces fairly short minimal sequences.</p>
<blockquote>
<p><strong>UPDATE: Note that <a href="https://arxiv.org/abs/1408.5108" rel="nofollow noreferrer">for <em>n</em> ≥ 6, this algorithm does not produce the shortest possible string!</a></strong></p>
</blockquote>
<ul>
<li>Make a collection of all permutations.</li>
<li>Remove the first permutation from the collection.</li>
<li>Let <em>a</em> = the first permutation.</li>
<li>Find the sequence in the collection that has the greatest overlap with the end of <em>a</em>. If there is a tie, choose the sequence is first in lexicographic order. Remove the chosen sequence from the collection and add the non-overlapping part to the end of <em>a</em>. Repeat this step until the collection is empty.</li>
</ul>
<p>The curious tie-breaking step is necessary for correctness; breaking the tie at random instead seems to result in longer strings.</p>
<p>I verified (by writing a much longer, slower program) that the answer this algorithm gives for length 4, 123412314231243121342132413214321, is indeed the shortest answer. However, for length 6 it produces an answer of length 873, which is longer than the shortest known solution.</p>
<p>The algorithm is O(<em>n</em>!<sup>2</sup>).</p>
<p>An implementation in Python:</p>
<pre><code>import itertools
def costToAdd(a, b):
    # Number of characters that must be appended to a to make it end
    # with b, given the longest prefix of b overlapping a suffix of a.
    for i in range(1, len(b)):
        if a.endswith(b[:-i]):
            return i
    return len(b)

def stringContainingAllPermutationsOf(s):
    perms = set(''.join(tpl) for tpl in itertools.permutations(s))
    perms.remove(s)
    a = s
    while perms:
        # Greedily pick the cheapest permutation to append; min on
        # (cost, string) tuples breaks ties in lexicographic order.
        cost, best = min((costToAdd(a, x), x) for x in perms)
        perms.remove(best)
        a += best[-cost:]
    return a
</code></pre>
<p>The lengths of the strings generated by this function are 1, 3, 9, 33, 153, 873, 5913, ..., which appears to be <a href="https://oeis.org/A007489" rel="nofollow noreferrer">this integer sequence</a>.</p>
<p>I have a hunch you can do better than O(<em>n</em>!<sup>2</sup>).</p> | 2010-02-13 14:09:49.963000+00:00 | 2018-10-24 18:48:24.670000+00:00 | 2018-10-24 18:48:24.670000+00:00 | null | 2,253,232 | <p>How can I generate the shortest sequence that contains all possible permutations?</p>
<p>Example:
For length 2 the answer is 121, because this list contains 12 and 21, which are all possible permutations.</p>
<p>For length 3 the answer is 123121321, because this list contains all possible permutations:
123, 231, 312, 121 (invalid), 213, 132, 321.</p>
<p>Each number (within a given permutation) may only occur once.</p> | 2010-02-12 16:17:57.630000+00:00 | 2020-09-24 14:25:25.917000+00:00 | 2019-08-16 09:12:22.930000+00:00 | algorithm|sequence|superpermutation | ['https://arxiv.org/abs/1408.5108', 'https://oeis.org/A007489'] | 2 |
30,866,285 | <p>There are a lot of ways to answer this question. The answer depends on your interpretation of phrases and sentences.</p>
<p>Distributional models such as <code>word2vec</code>, which provide a vector representation for each word, can only show how a word is usually used in a window-based context in relation to other words. Based on this interpretation of context-word relations, you can take the average vector of all words in a sentence as the vector representation of the sentence. For example, in this sentence:</p>
<blockquote>
<p>vegetarians eat vegetables .</p>
</blockquote>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=v_s%20%3D%20v(%60%60vegetarians%22)%20%2B%20v(%60%60eat%22)%20%2B%20v(%60%60vegrables%22)" alt="V_s"></p>
<p>We can take the normalised vector as vector representation:</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=v(%60%60vegetarians%5C%20eat%5C%20vegrables%22)%20%3D%20%5Cfrac%7Bv_s%7D%7B%7C%7Cv_s%7C%7C%7D" alt="V(sentence)"></p>
<p>The problem is the compositional nature of sentences. If you take the average of the word vectors as above, these two sentences have the same vector representation:</p>
<blockquote>
<p>vegetables eat vegetarians .</p>
</blockquote>
<p>There is a lot of research in the distributional fashion on learning tree structures through corpus processing. For example: <a href="http://www.socher.org/index.php/Main/ParsingWithCompositionalVectorGrammars">Parsing With Compositional Vector Grammars</a>. This <a href="https://www.youtube.com/watch?v=NJozqoejJnA">video</a> also explains this method.</p>
<p>Again I want to emphasise interpretation. These sentence vectors probably have their own meanings in your application. For instance, in sentiment analysis in <a href="http://nlp.stanford.edu/sentiment/">this project at Stanford</a>, the meaning that they are seeking is the positive/negative sentiment of a sentence. Even if you find a perfect vector representation for a sentence, there are philosophical debates that these are not actual meanings of sentences if you cannot judge the truth condition (David Lewis "General Semantics" 1970). That's why there are lines of work focusing on computer vision (<a href="http://arxiv.org/pdf/1501.02598.pdf">this paper</a> or <a href="http://nlp.stanford.edu/~socherr/SocherKarpathyLeManningNg_TACL2013.pdf">this paper</a>). My point is that it can completely depend on your application and interpretation of vectors.</p> | 2015-06-16 11:28:54.897000+00:00 | 2015-07-03 14:10:55.703000+00:00 | 2015-07-03 14:10:55.703000+00:00 | null | 30,795,944 | <p>We have models for converting words to vectors (for example the word2vec model). Do similar models exist which convert sentences/documents into vectors, using perhaps the vectors learnt for the individual words?</p> | 2015-06-12 05:36:56.443000+00:00 | 2018-07-28 05:49:12.400000+00:00 | null | vector|nlp|word2vec | ['http://www.socher.org/index.php/Main/ParsingWithCompositionalVectorGrammars', 'https://www.youtube.com/watch?v=NJozqoejJnA', 'http://nlp.stanford.edu/sentiment/', 'http://arxiv.org/pdf/1501.02598.pdf', 'http://nlp.stanford.edu/~socherr/SocherKarpathyLeManningNg_TACL2013.pdf'] | 5
30,977,313 | <p>1) Skip gram method: <a href="https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCMQFjAA&url=http%3A%2F%2Fpapers.nips.cc%2Fpaper%2F5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf&ei=FdyHVeeNDMqhugSShZugAg&usg=AFQjCNH4-Ecded1JdFimktgajA3mvcdXaQ&sig2=5hiJmQ0XYxBX639t-gJ2jw&bvm=bv.96339352,d.c2E">paper here</a> and the tool that uses it, <a href="https://code.google.com/p/word2vec/">google word2vec</a></p>
<p>2) Using <a href="http://arxiv.org/abs/1502.06922">LSTM-RNN</a> to form semantic representations of sentences.</p>
<p>3) Representations of <a href="https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCIQFjAA&url=http%3A%2F%2Fcs.stanford.edu%2F~quocle%2Fparagraph_vector.pdf&ei=Ct2HVa-9GcOMuAS3-oDoDA&usg=AFQjCNESECVF_9eXAkAjfSqqHrqlxkVQgg&sig2=ozIjjEKK9rqrn4T0wabTQw&bvm=bv.96339352,d.c2E">sentences and documents</a>. The Paragraph vector is introduced in this paper. It is basically an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of texts, such as sentences, paragraphs, and documents.</p>
<p>4) Though this <a href="http://arxiv.org/abs/1503.00075">paper</a> does not form sentence/paragraph vectors, it is simple enough to do that. One can just plug in the individual word vectors (<a href="http://nlp.stanford.edu/pubs/glove.pdf">Glove</a> <a href="http://nlp.stanford.edu/projects/glove/">word vectors</a> are found to give the best performance) and then form a vector representation of the whole sentence/paragraph.</p>
<p>5) Using a <a href="http://arxiv.org/pdf/1406.3830.pdf">CNN</a> to <a href="http://arxiv.org/abs/1502.01710">summarize</a> documents.</p> | 2015-06-22 10:12:18.013000+00:00 | 2015-06-22 10:12:18.013000+00:00 | null | null | 30,795,944 | <p>We have models for converting words to vectors (for example the word2vec model). Do similar models exist which convert sentences/documents into vectors, using perhaps the vectors learnt for the individual words?</p> | 2015-06-12 05:36:56.443000+00:00 | 2018-07-28 05:49:12.400000+00:00 | null | vector|nlp|word2vec | ['https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCMQFjAA&url=http%3A%2F%2Fpapers.nips.cc%2Fpaper%2F5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf&ei=FdyHVeeNDMqhugSShZugAg&usg=AFQjCNH4-Ecded1JdFimktgajA3mvcdXaQ&sig2=5hiJmQ0XYxBX639t-gJ2jw&bvm=bv.96339352,d.c2E', 'https://code.google.com/p/word2vec/', 'http://arxiv.org/abs/1502.06922', 'https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CCIQFjAA&url=http%3A%2F%2Fcs.stanford.edu%2F~quocle%2Fparagraph_vector.pdf&ei=Ct2HVa-9GcOMuAS3-oDoDA&usg=AFQjCNESECVF_9eXAkAjfSqqHrqlxkVQgg&sig2=ozIjjEKK9rqrn4T0wabTQw&bvm=bv.96339352,d.c2E', 'http://arxiv.org/abs/1503.00075', 'http://nlp.stanford.edu/pubs/glove.pdf', 'http://nlp.stanford.edu/projects/glove/', 'http://arxiv.org/pdf/1406.3830.pdf', 'http://arxiv.org/abs/1502.01710'] | 9 |
31,181,672 | <p>A solution that is slightly less off the shelf, but probably hard to beat in terms of accuracy if you have a specific thing you're trying to do:</p>
<p>Build an RNN (with LSTM or GRU memory cells, <a href="http://arxiv.org/pdf/1412.3555.pdf" rel="noreferrer">comparison here</a>) and optimize the error function of the actual task you're trying to accomplish. You feed it your sentence, and train it to produce the output you want. The activations of the network after being fed your sentence are a representation of the sentence (although you might only care about the network's output).</p>
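<p>For concreteness, here is a minimal sketch of such a setup in Keras – the layer sizes, vocabulary size, and binary output are illustrative assumptions, not a prescription:</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(input_dim=10000, output_dim=100),  # learned word vectors
    LSTM(128),                                   # sentence representation
    Dense(1, activation="sigmoid"),              # task-specific output
])
model.compile(loss="binary_crossentropy", optimizer="adam")
# model.fit(padded_sequences, labels, epochs=5)  # padded word-index inputs
</code></pre>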
<p>You can represent the sentence as a sequence of one-hot encoded characters, as a sequence of one-hot encoded words, or as a sequence of word vectors (e.g. <a href="http://nlp.stanford.edu/projects/glove/" rel="noreferrer">GloVe</a> or <a href="https://code.google.com/p/word2vec/" rel="noreferrer">word2vec</a>). If you use word vectors, you can keep backpropagating into the word vectors, updating their weights, so you also get custom word vectors tweaked specifically for the task you're doing. </p> | 2015-07-02 10:07:55.037000+00:00 | 2015-07-02 15:14:40.503000+00:00 | 2015-07-02 15:14:40.503000+00:00 | null | 30,795,944 | <p>We have models for converting words to vectors (for example the word2vec model). Do similar models exist which convert sentences/documents into vectors, using perhaps the vectors learnt for the individual words?</p> | 2015-06-12 05:36:56.443000+00:00 | 2018-07-28 05:49:12.400000+00:00 | null | vector|nlp|word2vec | ['http://arxiv.org/pdf/1412.3555.pdf', 'http://nlp.stanford.edu/projects/glove/', 'https://code.google.com/p/word2vec/'] | 3 |
18,577,527 | <p>Performance. That's why <a href="http://en.wikipedia.org/wiki/NumPy" rel="nofollow">NumPy</a> is so fast (<a href="http://arxiv.org/pdf/1102.1523.pdf" rel="nofollow">"The NumPy array: a structure for efficient
numerical computation"</a>)</p> | 2013-09-02 16:35:15.847000+00:00 | 2013-09-02 16:35:15.847000+00:00 | null | null | 18,577,413 | <p>Recently I've read that we can code C/C++ and from python call those modules, I know that C/C++ is fast and strongly typed and those things but what advantages I got if I code some module and then call it from python? in what case/scenario/context it would be nice to implement this?</p>
<p>Thanks in advance.</p> | 2013-09-02 16:27:35.813000+00:00 | 2013-09-02 16:59:13.643000+00:00 | 2013-09-02 16:32:52.120000+00:00 | c++|python|c | ['http://en.wikipedia.org/wiki/NumPy', 'http://arxiv.org/pdf/1102.1523.pdf'] | 2 |
44,884,590 | <p>It sounds like you're hoping to be able to find a generic term for the words in the cluster – sort of a <em>hypernym</em> – with an automated process, and were hoping that the centroid would be that term. </p>
<p>Unfortunately, I've not seen any claims word2vec winds up arranging words that way. Words do tend to be close to other words that could fill in for them – but there really aren't any guarantees all words of shared type are closer to each other than other types of words, or that hypernyms tend to be equidistant from their hyponyms, and so on. (It's certainly possible given the success of word2vec in analogy-solving that hypernyms tend to be offset from their hyponyms in a vaguely similar direction across classes. That is, <em>perhaps</em> vaguely <code>'volkswagen' + ('animal' - 'dog') ~ 'car'</code> – though I haven't checked.)</p>
<p>There's an interesting observation sometimes made about word-vectors that could be relevant: word-vectors for words with more diffuse meaning – such as multiple senses – often tend to have lower magnitudes, in their raw form, than other word-vectors for words with more singular meanings. The usual most-similar calculations ignore the magnitudes, just comparing the raw directions, but a search for more-generic terms might want to favor lower-magnitude vectors. But this is also just a guess I haven't checked. </p>
<p>You could look up work on automated hypernym/hyponym discovery, and it's possible word2vec vectors could be a contributing factor to such discovery processes – either trained in the normal way, or with some new wrinkles to try to force the desired arrangement. (But, such specializations aren't generally supported by gensim out-of-the-box.)</p>
<p>There are often papers that refine the word2vec training process to make the vectors better for particular purposes. One recent paper from Facebook Research that seems relevant is "<a href="https://arxiv.org/abs/1705.08039" rel="nofollow noreferrer">Poincaré Embeddings for Learning Hierarchical Representations</a>" – which reports better modeling of hierarchies and specifically tests on the noun hypernym graph of WordNet.</p> | 2017-07-03 11:38:53.190000+00:00 | 2017-07-03 11:38:53.190000+00:00 | null | null | 44,871,728 | <p>I used the gensim package in Python to load the pre-trained Google word2vec dataset. I then want to use k-means to find meaningful clusters on my word vectors, and find the representative word for each cluster. I am thinking to use the word whose corresponding vector is closest to the centroid of a cluster to represent that cluster, but don't know whether this is a good idea as my experiment did not give me good results.</p>
<p>My example code is like below:</p>
<pre><code>import gensim
import numpy as np
import pandas as pd
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import pairwise_distances_argmin_min
model = gensim.models.KeyedVectors.load_word2vec_format('/home/Desktop/GoogleNews-vectors-negative300.bin', binary=True)
K=3
words = ["ship", "car", "truck", "bus", "vehicle", "bike", "tractor", "boat",
"apple", "banana", "fruit", "pear", "orange", "pineapple", "watermelon",
"dog", "pig", "animal", "cat", "monkey", "snake", "tiger", "rat", "duck", "rabbit", "fox"]
NumOfWords = len(words)
# construct the n-dimentional array for input data, each row is a word vector
x = np.zeros((NumOfWords, model.vector_size))
for i in range(0, NumOfWords):
x[i,]=model[words[i]]
# train the k-means model
classifier = MiniBatchKMeans(n_clusters=K, random_state=1, max_iter=100)
classifier.fit(x)
# check whether the words are clustered correctly
print(classifier.predict(x))
# find the index and the distance of the closest points from x to each class centroid
close = pairwise_distances_argmin_min(classifier.cluster_centers_, x, metric='euclidean')
index_closest_points = close[0]
distance_closest_points = close[1]
for i in range(0, K):
print("The closest word to the centroid of class {0} is {1}, the distance is {2}".format(i, words[index_closest_points[i]], distance_closest_points[i]))
</code></pre>
<p>The output is as below:</p>
<pre><code>[2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0]
The closest word to the centroid of class 0 is rabbit, the distance is 1.578625818679259
The closest word to the centroid of class 1 is fruit, the distance is 1.8351978219013796
The closest word to the centroid of class 2 is car, the distance is 1.6586030662247868
</code></pre>
<p>In the code I have 3 categories of words: vehicle, fruit and animal. From the output we can see that k-means correctly clustered the words for all 3 categories, but the representative words derived using the centroid method are not very good, as for class 0 I want to see "animal" but it gives "rabbit", and for class 2 I want to see "vehicle" but it returns "car".</p>
<p>Any help or suggestion in finding the good representative word for each cluster will be highly appreciated.</p> | 2017-07-02 14:14:36.283000+00:00 | 2017-07-03 11:38:53.190000+00:00 | 2017-07-02 14:21:18.697000+00:00 | python|k-means|gensim|word2vec | ['https://arxiv.org/abs/1705.08039'] | 1 |
<p>Generally, input scale matters. Changing to grayscale matters for sure. Details depend on the training data. That is, if the training data contains the object at the same scale you use, it might not make a big difference; if not, it makes a difference. Deep learning is mostly not invariant to any changes in the data. CNNs show some invariance to translation, but that is about it. Rotation, scaling, color distortion, brightness etc. all impact performance negatively - if these conditions have not been part of the training.</p>
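<p>If your training data lacks those conditions, one remedy is to add them as training-time augmentation; a minimal sketch with Keras (the parameter values are illustrative):</p>
<pre><code>from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=30,            # random rotations
    zoom_range=0.2,               # random scaling
    brightness_range=(0.7, 1.3),  # random brightness changes
)
# flow = datagen.flow_from_directory("data/train", target_size=(224, 224))
</code></pre>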
<p>The paper <a href="https://arxiv.org/abs/2106.06057" rel="nofollow noreferrer">https://arxiv.org/abs/2106.06057</a> published at IJCNN 2022 investigates a classifier on rotated and scaled images on simple datasets like MNIST (digits) and show that performance deteriorates a lot. There are also other papers that showed the same thing.</p> | 2022-06-01 22:15:18.560000+00:00 | 2022-06-01 22:15:18.560000+00:00 | null | null | 55,487,087 | <p>As the title says, I want to know whether input shape affects the accuracy of the deep learning model.</p>
<p>Also, can pre-trained models (like Xception) be used on grayscale images?</p>
<p>P.S. : I recently started learning deep learning so if possible please explain in simple terms.</p> | 2019-04-03 04:28:35.103000+00:00 | 2022-06-01 22:15:18.560000+00:00 | null | python|keras|conv-neural-network|pre-trained-model|transfer-learning | ['https://arxiv.org/abs/2106.06057'] | 1 |
55,286,664 | <p>Another possible way to get better performance would be to reduce the model as much as possible.</p>
<p>One of the most promising techniques is quantized and binarized neural networks. Here are some references:</p>
<ol>
<li><a href="https://arxiv.org/abs/1603.05279" rel="nofollow noreferrer">https://arxiv.org/abs/1603.05279</a></li>
<li><a href="https://arxiv.org/abs/1602.02505" rel="nofollow noreferrer">https://arxiv.org/abs/1602.02505</a></li>
</ol> | 2019-03-21 18:05:23.147000+00:00 | 2019-03-21 18:05:23.147000+00:00 | null | null | 55,253,708 | <p>I have to productionize a PyTorch BERT Question Answer model. The CPU inference is very slow for me as for every query the model needs to evaluate 30 samples. Out of the result of these 30 samples, I pick the answer with the maximum score. GPU would be too costly for me to use for inference.</p>
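<p>In PyTorch specifically (the framework in the question), post-training dynamic quantization is a low-effort way to try this. A minimal sketch, assuming <code>model</code> is your already-loaded float32 model:</p>
<pre><code>import torch

# Quantize the Linear layers (which dominate BERT) to int8 for CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
# Use quantized_model exactly like the original model for inference.
</code></pre>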
<p>Can I leverage multiprocessing / parallel CPU inference for this?
If Yes, what is the best practice to do so?
If No, is there a cloud option that bills me only for the GPU queries I make and not for continuously running the GPU instance?</p> | 2019-03-20 04:42:26.587000+00:00 | 2022-05-26 16:50:23.733000+00:00 | null | python|multiprocessing|pytorch | ['https://arxiv.org/abs/1603.05279', 'https://arxiv.org/abs/1602.02505'] | 2 |
62,069,571 | <p>For the benefit of the community, here I am explaining how to use <code>image_generator</code> in TensorFlow with input shape <code>(100, 100, 3)</code>, using the <code>dogs vs cats</code> dataset.</p>
<p>If we haven't chosen the right batch size, there is a chance the model gets stuck right after the first epoch; hence I am starting my explanation with <code>how to choose batch_size?</code></p>
<p>We generally choose the <code>batch size</code> to be a <code>power of 2</code>, because of the efficiency of optimized matrix operation libraries. This is further elaborated in <a href="https://arxiv.org/abs/1303.2314" rel="nofollow noreferrer">this</a> research paper.</p>
<p>Check out <a href="https://mydeeplearningnb.wordpress.com/2019/02/23/convnet-for-classification-of-cifar-10/" rel="nofollow noreferrer">this</a> blog which describes how to choose the right <code>batch size</code> while comparing the effects of different batch sizes on the <code>accuracy</code> of CIFAR-10 dataset.</p>
<p>Here is the end to end working code with outputs</p>
<pre><code>from tensorflow.keras.layers import Dense, Activation, BatchNormalization, Flatten, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.models import Sequential
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow.keras.backend as K

K.set_image_data_format('channels_last')
train_dir = '/content/drive/My Drive/Dogs_Vs_Cats/train'
test_dir = '/content/drive/My Drive/Dogs_Vs_Cats/test'
img_width, img_height = 100, 100
input_shape = img_width, img_height, 3
train_samples = 2000
test_samples = 1000
epochs = 30
batch_size = 32
train_datagen = ImageDataGenerator(
rescale = 1. /255,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
test_datagen = ImageDataGenerator(
rescale = 1. /255)
train_data = train_datagen.flow_from_directory(
train_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary')
test_data = test_datagen.flow_from_directory(
test_dir,
target_size = (img_width, img_height),
batch_size = batch_size,
class_mode = 'binary')
model = Sequential()
model.add(Conv2D(32, (7, 7), strides = (1, 1), input_shape = input_shape))
model.add(BatchNormalization(axis = 3))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (7, 7), strides = (1, 1)))
model.add(BatchNormalization(axis = 3))
model.add(Activation('relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss = 'binary_crossentropy',
optimizer = 'rmsprop',
metrics = ['accuracy'])
model.fit_generator(
train_data,
steps_per_epoch = train_samples//batch_size,
epochs = epochs,
validation_data = test_data,
verbose = 1,
validation_steps = test_samples//batch_size)
</code></pre>
<p>Output:</p>
<pre><code>Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 94, 94, 32) 4736
_________________________________________________________________
batch_normalization_8 (Batch (None, 94, 94, 32) 128
_________________________________________________________________
activation_8 (Activation) (None, 94, 94, 32) 0
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 47, 47, 32) 0
_________________________________________________________________
conv2d_9 (Conv2D) (None, 41, 41, 64) 100416
_________________________________________________________________
batch_normalization_9 (Batch (None, 41, 41, 64) 256
_________________________________________________________________
activation_9 (Activation) (None, 41, 41, 64) 0
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 20, 20, 64) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 25600) 0
_________________________________________________________________
dense_11 (Dense) (None, 64) 1638464
_________________________________________________________________
dropout_4 (Dropout) (None, 64) 0
_________________________________________________________________
dense_12 (Dense) (None, 1) 65
=================================================================
Total params: 1,744,065
Trainable params: 1,743,873
Non-trainable params: 192
_________________________________________________________________
Epoch 1/30
62/62 [==============================] - 14s 225ms/step - loss: 1.8307 - accuracy: 0.4853 - val_loss: 0.6931 - val_accuracy: 0.5000
Epoch 2/30
62/62 [==============================] - 14s 226ms/step - loss: 0.7085 - accuracy: 0.4832 - val_loss: 0.6931 - val_accuracy: 0.5010
Epoch 3/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6955 - accuracy: 0.5300 - val_loss: 0.6894 - val_accuracy: 0.5292
Epoch 4/30
62/62 [==============================] - 14s 221ms/step - loss: 0.6938 - accuracy: 0.5407 - val_loss: 0.7309 - val_accuracy: 0.5262
Epoch 5/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6860 - accuracy: 0.5498 - val_loss: 0.6776 - val_accuracy: 0.5665
Epoch 6/30
62/62 [==============================] - 13s 216ms/step - loss: 0.7027 - accuracy: 0.5407 - val_loss: 0.6895 - val_accuracy: 0.5101
Epoch 7/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6852 - accuracy: 0.5528 - val_loss: 0.6567 - val_accuracy: 0.5887
Epoch 8/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6772 - accuracy: 0.5427 - val_loss: 0.6643 - val_accuracy: 0.5847
Epoch 9/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6709 - accuracy: 0.5534 - val_loss: 0.6623 - val_accuracy: 0.5887
Epoch 10/30
62/62 [==============================] - 14s 219ms/step - loss: 0.6579 - accuracy: 0.5711 - val_loss: 0.6614 - val_accuracy: 0.6058
Epoch 11/30
62/62 [==============================] - 13s 218ms/step - loss: 0.6591 - accuracy: 0.5625 - val_loss: 0.6594 - val_accuracy: 0.5454
Epoch 12/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6419 - accuracy: 0.5767 - val_loss: 1.1041 - val_accuracy: 0.5161
Epoch 13/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6479 - accuracy: 0.5783 - val_loss: 0.6441 - val_accuracy: 0.5837
Epoch 14/30
62/62 [==============================] - 13s 216ms/step - loss: 0.6373 - accuracy: 0.5899 - val_loss: 0.6427 - val_accuracy: 0.6310
Epoch 15/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6203 - accuracy: 0.6133 - val_loss: 0.7390 - val_accuracy: 0.6220
Epoch 16/30
62/62 [==============================] - 13s 217ms/step - loss: 0.6277 - accuracy: 0.6362 - val_loss: 0.6649 - val_accuracy: 0.5786
Epoch 17/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6155 - accuracy: 0.6316 - val_loss: 0.9823 - val_accuracy: 0.5484
Epoch 18/30
62/62 [==============================] - 14s 222ms/step - loss: 0.6056 - accuracy: 0.6408 - val_loss: 0.6333 - val_accuracy: 0.6048
Epoch 19/30
62/62 [==============================] - 14s 218ms/step - loss: 0.6025 - accuracy: 0.6529 - val_loss: 0.6514 - val_accuracy: 0.6442
Epoch 20/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6149 - accuracy: 0.6423 - val_loss: 0.6373 - val_accuracy: 0.6048
Epoch 21/30
62/62 [==============================] - 13s 215ms/step - loss: 0.6030 - accuracy: 0.6519 - val_loss: 0.6086 - val_accuracy: 0.6573
Epoch 22/30
62/62 [==============================] - 13s 217ms/step - loss: 0.5936 - accuracy: 0.6865 - val_loss: 1.0677 - val_accuracy: 0.5605
Epoch 23/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5964 - accuracy: 0.6728 - val_loss: 0.7927 - val_accuracy: 0.5877
Epoch 24/30
62/62 [==============================] - 13s 215ms/step - loss: 0.5866 - accuracy: 0.6707 - val_loss: 0.6116 - val_accuracy: 0.6421
Epoch 25/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5933 - accuracy: 0.6662 - val_loss: 0.8282 - val_accuracy: 0.6048
Epoch 26/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5705 - accuracy: 0.6885 - val_loss: 0.5806 - val_accuracy: 0.6966
Epoch 27/30
62/62 [==============================] - 14s 218ms/step - loss: 0.5709 - accuracy: 0.7017 - val_loss: 1.2404 - val_accuracy: 0.5333
Epoch 28/30
62/62 [==============================] - 13s 216ms/step - loss: 0.5691 - accuracy: 0.7104 - val_loss: 0.6136 - val_accuracy: 0.6442
Epoch 29/30
62/62 [==============================] - 13s 215ms/step - loss: 0.5627 - accuracy: 0.7048 - val_loss: 0.6936 - val_accuracy: 0.6613
Epoch 30/30
62/62 [==============================] - 13s 214ms/step - loss: 0.5714 - accuracy: 0.6941 - val_loss: 0.5872 - val_accuracy: 0.6825
</code></pre> | 2020-05-28 16:19:29.337000+00:00 | 2020-05-28 16:19:29.337000+00:00 | null | null | 61,215,270 | <p>I'm trying to use this approach in Tensorflow 2.X to load large dataset that does not fit in memory.</p>
<p>I have a folder with X sub-folders that contains images. Each sub-folder is a class.</p>
<pre><code>\dataset
-\class1
-img1_1.jpg
-img1_2.jpg
-...
-\classe2
-img2_1.jpg
-img2_2.jpg
-...
</code></pre>
<p>I create my data generator from my folder like this:</p>
<pre><code>train_data_gen = image_generator.flow_from_directory(directory="path\\to\\dataset",
batch_size=100,
shuffle=True,
target_size=(100, 100), # Image H x W
classes=list(CLASS_NAMES)) # list of folder/class names ["class1", "class2", ...., "classX"]
</code></pre>
<blockquote>
<p>Found 629 images belonging to 2 classes.</p>
</blockquote>
<p>I've made a smaller dataset to test the pipeline. Only 629 images in 2 classes.
Now I can create a dummy model like this:</p>
<pre><code>model = tf.keras.Sequential()
model.add(Dense(1, activation=activation, input_shape=(100, 100, 3))) # only 1 layer of 1 neuron
model.add(Dense(2)) # 2classes
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['categorical_accuracy'])
</code></pre>
<p>Once compile I try to fit this dummy model:</p>
<pre><code>STEPS_PER_EPOCH = np.ceil(image_count / batch_size) # 629 / 100
model.fit_generator(generator=train_data_gen , steps_per_epoch=STEPS_PER_EPOCH, epochs=2, verbose=1)
1/7 [===>..........................] - ETA: 2s - loss: 1.1921e-07 - categorical_accuracy: 0.9948
2/7 [=======>......................] - ETA: 1s - loss: 1.1921e-07 - categorical_accuracy: 0.5124
3/7 [===========>..................] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.3449
4/7 [================>.............] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.2662
5/7 [====================>.........] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.2130
6/7 [========================>.....] - ETA: 0s - loss: 1.1921e-07 - categorical_accuracy: 0.1808
</code></pre>
<blockquote>
<p>2020-04-14 20:39:48.629203: W tensorflow/core/framework/op_kernel.cc:1610] Invalid argument: ValueError: <code>generator</code> yielded an element of shape (29, 100, 100, 3) where an element of shape (100, 100, 100, 3) was expected.</p>
</blockquote>
<p>From what I understand, the last batch doesn't have the same shape as the previous batches, so it crashes. I've tried to specify a <code>batch_input_shape</code>.</p>
<pre><code>model.add(Dense(1, activation=activation, batch_input_shape=(None, 100, 100, 3)))
</code></pre>
<p>I've found <a href="https://stackoverflow.com/a/52126464/5462743">here</a> that I should put <code>None</code> to not specify the number of elements in the batch so it can be dynamic. But no success.</p>
<p>Edit: From the comments, I had 2 mistakes:</p>
<ul>
<li>The output shape was bad. I missed the flatten layer in the model.</li>
<li>The previous link does work with the correction of the flatten layer</li>
<li>Some code was missing: I actually feed <code>fit_generator</code> with a <code>tf.data.Dataset.from_generator</code>, but I had shown an <code>image_generator.flow_from_directory</code> here.</li>
</ul>
<p>Here is the final code:</p>
<pre><code>train_data_gen = image_generator.flow_from_directory(directory="path\\to\\dataset",
batch_size=1000,
shuffle=True,
target_size=(100, 100),
classes=list(CLASS_NAMES))
train_dataset = tf.data.Dataset.from_generator(
lambda: train_data_gen,
output_types=(tf.float32, tf.float32),
output_shapes=([None, x, y, 3],
[None, len(CLASS_NAMES)]))
model = tf.keras.Sequential()
model.add(Flatten(batch_input_shape=(None, 100, 100, 3)))
model.add(Dense(1, activation=activation))
model.add(Dense(2))
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['categorical_accuracy'])
STEPS_PER_EPOCH = np.ceil(image_count / batch_size) # 629 / 100
model.fit_generator(generator=train_data_gen , steps_per_epoch=STEPS_PER_EPOCH, epochs=2, verbose=1)
</code></pre> | 2020-04-14 19:03:54.520000+00:00 | 2020-05-28 16:19:29.337000+00:00 | 2020-04-15 09:01:10.610000+00:00 | tensorflow2.0|tensorflow-datasets | ['https://arxiv.org/abs/1303.2314', 'https://mydeeplearningnb.wordpress.com/2019/02/23/convnet-for-classification-of-cifar-10/'] | 2 |
39,960,750 | <p>This is a hugely interesting, and very hard, problem area. It will probably take you months to read enough to even understand how to attack the problem. Here are a few things that might help you get started, and they are more to show the problems you will face than to provide solutions:</p>
<p><a href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/" rel="nofollow">http://karpathy.github.io/2015/05/21/rnn-effectiveness/</a></p>
<p>Then read this, and related papers:</p>
<p><a href="https://arxiv.org/pdf/1410.5401v2.pdf" rel="nofollow">https://arxiv.org/pdf/1410.5401v2.pdf</a></p>
<p>Next, you probably want to read the classic papers in program synthesis and generation at the parse tree/AST level (mostly out of MIT, I think, in the early 90s.)</p>
<p>Best of luck. This is not trivial.</p> | 2016-10-10 14:36:07.963000+00:00 | 2016-10-10 14:36:07.963000+00:00 | null | null | 39,960,401 | <p>This might not be the right place for this to ask, but I am interested in artificial neural networks and want to learn more.</p>
<p>How do you design a network and train it on source code so it can come up with programs for, for example, easy number theory problems?</p>
<p>What's the general name of this research field?</p> | 2016-10-10 14:16:42.507000+00:00 | 2016-10-10 14:36:07.963000+00:00 | 2016-10-10 14:21:33.970000+00:00 | machine-learning|neural-network|tensorflow|artificial-intelligence | ['http://karpathy.github.io/2015/05/21/rnn-effectiveness/', 'https://arxiv.org/pdf/1410.5401v2.pdf'] | 2 |
51,716,817 | <p>Thought I might point out that arbitrarily making the batch size large (when you have large amounts of memory) can sometimes hurt the generalization of your model.</p>
<p>Reference: </p>
<p><a href="https://arxiv.org/pdf/1705.08741.pdf" rel="nofollow noreferrer">Train longer, generalize better</a></p>
<p><a href="https://arxiv.org/abs/1609.04836" rel="nofollow noreferrer">On Large-Batch Training for Deep Learning</a>. </p> | 2018-08-06 23:18:50.423000+00:00 | 2018-08-06 23:18:50.423000+00:00 | null | null | 51,708,210 | <p>I am using TensorFlow 1.9, on an NVIDIA GPU with 3 GB of memory. The size of my minibatch is 100 MB. Therefore, I could potentially fit multiple minibatches on my GPU at the same time. So my question is about whether this is possible and whether it is standard practice.</p>
<p>For example, when I train my TensorFlow model, I run something like this on every epoch:</p>
<pre><code>loss_sum = 0
for batch_num in range(num_batches):
batch_inputs = get_batch_inputs()
batch_labels = get_batch_labels()
batch_loss, _ = sess.run([loss_op, train_op], feed_dict={inputs: batch_inputs, labels: batch_labels})
loss_sum += batch_loss
loss = loss_sum / num_batches
</code></pre>
<p>This iterates over my minibatches and performs one weight update per minibatch. But the size of <code>batch_inputs</code> and <code>batch_labels</code> is only 100 MB, so the majority of the GPU is not being used.</p>
<p>One option would be to just increase the minibatch size so that the minibatch is closer to the 3 GB GPU capacity. However, I want to keep the same small minibatch size to help with optimisation.</p>
<p>So the other option might be to send multiple minibatches through the GPU in parallel, and perform one weight update per minibatch. Being able to send the minibatches in parallel would significantly reduce the training time.</p>
<p>Is this possible and recommended?</p> | 2018-08-06 12:57:05.307000+00:00 | 2018-08-06 23:18:50.423000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1705.08741.pdf', 'https://arxiv.org/abs/1609.04836'] | 2 |
51,774,293 | <p>Looking at <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">the VGG16 paper</a> and interpreting a bit, I believe the difference is basically in how many times your base network is going to see the input images, and how it will treat them as a result.</p>
<p>According to the paper, random scaling is performed on input images during training (scale jitter). If you place your new dense layers on top of the frozen base network and then run the whole stack through a training procedure (second approach), I suppose the assumption is that you would not be disabling the scale jitter mechanism in the base network; thus you would see (different) randomly-scaled versions of each input image each time through your training set (each epoch).</p>
<p>If you run the input images through your base network a single time (first approach), the base is essentially running in an evaluation mode, so it does not scale the input image at all, or do any other sort of image augmentation type of transformation. You <em>could</em> do so yourself to basically add the augmented input images to your newly-transformed dataset. I suppose the book is assuming that you <em>won't</em> do this.</p>
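<p>To make the two approaches concrete, here is a minimal Keras sketch (my own illustration, not code from the book; names like <code>train_images</code> are made up):</p>
<pre><code>from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

# Approach 1: run the frozen base ONCE, then train a small classifier on cached features
train_features = conv_base.predict(train_images)   # offline feature extraction
clf = models.Sequential([layers.Flatten(),
                         layers.Dense(256, activation='relu'),
                         layers.Dense(1, activation='sigmoid')])

# Approach 2: train end-to-end, so (runtime) augmentation reaches the base every epoch
conv_base.trainable = False
model = models.Sequential([conv_base,
                           layers.Flatten(),
                           layers.Dense(256, activation='relu'),
                           layers.Dense(1, activation='sigmoid')])
</code></pre>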
<p>Either way, you would likely end up training on multiple epochs (multiple times through the dataset) so the second approach would carry the added load of executing the whole base network for every training sample for every epoch, whereas the first approach would only require executing the base network once for each sample offline, and then just training on the pre-transformed samples.</p> | 2018-08-09 19:14:00.057000+00:00 | 2018-08-09 19:14:00.057000+00:00 | null | null | 51,773,119 | <p>In the book <a href="https://rads.stackoverflow.com/amzn/click/com/1617294438" rel="nofollow noreferrer" rel="nofollow noreferrer">Deep Learning with Python</a> by François Chollet (creator of Keras), section 5.3 (see the <a href="https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb" rel="nofollow noreferrer">companion Jupyter notebook</a>), the following is unclear to me:</p>
<blockquote>
<p>Let's put this in practice by using the convolutional base of the VGG16 network, trained on ImageNet, to extract interesting features from our cat and dog images, and then training a cat vs. dog classifier on top of these features.</p>
<p>[...]</p>
<p>There are two ways we could proceed:</p>
<ul>
<li>Running the convolutional base over our dataset, recording its output
to a Numpy array on disk, then using this data as input to a
standalone densely-connected classifier similar to those you have seen
in the first chapters of this book. This solution is very fast and
cheap to run, because it only requires running the convolutional base
once for every input image, and the convolutional base is by far the
most expensive part of the pipeline. <strong>However, for the exact same
reason, this technique would not allow us to leverage data
augmentation at all</strong>.</li>
<li>Extending the model we have (conv_base) by adding
Dense layers on top, and running the whole thing end-to-end on the
input data. This allows us to use data augmentation, because every
input image is going through the convolutional base every time it is
seen by the model. However, for this same reason, this technique is
far more expensive than the first one.</li>
</ul>
</blockquote>
<p>Why can't we augment our data (generate more images from the existing data), run the convolutional base over the augmented dataset (one time), record its output and then use this data as input to a standalone fully-connected classifier?</p>
<p>Wouldn't it give similar results to the second alternative but be faster?</p>
<p>What am I missing?</p> | 2018-08-09 17:48:10.727000+00:00 | 2018-08-10 15:28:51.973000+00:00 | 2020-06-20 09:12:55.060000+00:00 | machine-learning|neural-network|keras|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1409.1556.pdf'] | 1 |
72,051,728 | <p>In more recent work [1], it was found that you can use LayerNorm in CNNs without degrading accuracy, though it depends on the model architecture. Liu et al. [1] found while developing ConvNeXt that "Directly substituting LN for BN in the original ResNet will result in suboptimal performance", but they observed that their ConvNeXt "model does not have any difficulties training with LN; in fact, the performance is slightly better".</p>
<p>It would be great if there were a better explanation as to why...</p>
<ol>
<li>Liu et al. A ConvNet for the 2020s. <a href="https://arxiv.org/pdf/2201.03545.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2201.03545.pdf</a></li>
</ol> | 2022-04-29 00:59:39.730000+00:00 | 2022-04-29 00:59:39.730000+00:00 | null | null | 44,941,231 | <p>I see that Layer Normalization is a more modern normalization method than Batch Normalization, and it is very simple to code in Tensorflow.
But I think Layer Normalization was designed for RNNs, and Batch Normalization for CNNs.
Can I use Layer Normalization with a CNN that processes an image classification task?
What are the criteria for choosing Batch Normalization or Layer Normalization?</p>
59,343,536 | <p>I never heard the term "differentiable programming" before reading your question, but having used the concepts noted in your references (both from the side of creating code to solve a derivative with <a href="https://www.cs.utexas.edu/users/novak/asg-symdif.html" rel="nofollow noreferrer">Symbolic differentiation</a> and with <a href="https://en.wikipedia.org/wiki/Automatic_differentiation" rel="nofollow noreferrer">Automatic differentiation</a>) and having written interpreters and compilers, to me this just means that they have made it easier to calculate the numeric value of the derivative of a function. I don't know if they made it a <a href="https://en.wikipedia.org/wiki/First-class_citizen" rel="nofollow noreferrer">First-class citizen</a>, but the new way doesn't require the use of a function/method call; it is done with syntax, and the compiler/interpreter hides the translation into calls.</p>
<p>If you look at the <a href="https://github.com/FluxML/Zygote.jl" rel="nofollow noreferrer">Zygote</a> example it clearly shows the use of <a href="http://web.mit.edu/wwmath/calculus/differentiation/notation.html" rel="nofollow noreferrer">prime notation</a></p>
<pre><code>julia> f(10), f'(10)
</code></pre>
<p>Most seasoned programmers could guess what I just noted even without a research paper explaining it. In other words, it is just that obvious.</p>
<p>Another way to think about it: if you have ever tried to calculate a derivative in a programming language, you know how hard it can be at times; then ask yourself why they (the language designers and programmers) don't just add it into the language. In these cases they did.</p>
<p>What surprises me is how long it took before derivatives became available via syntax instead of calls, but if you have ever worked with scientific code or coded neural networks at that level then you will understand why this is a concept that is being touted as something of value.</p>
<p>Also I would not view this as another <a href="https://en.wikipedia.org/wiki/Programming_paradigm" rel="nofollow noreferrer">programming paradigm</a>, but I am sure it will be added to the list.</p>
<blockquote>
<p>How does it relate to automatic differentiation (the two seem conflated a lot of the time)?</p>
</blockquote>
<p>In both cases that you referenced, they use automatic differentiation to calculate the derivative instead of using symbolic differentiation. I do not view <em>differentiable programming</em> and <em>automatic differentiation</em> as being two distinct sets; rather, <em>differentiable programming</em> needs a means of being implemented, and the way they chose was <em>automatic differentiation</em>. They could have chosen <em>symbolic differentiation</em> or some other means.</p>
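<p>To make the distinction tangible, here is a toy sketch of forward-mode automatic differentiation using dual numbers (my own Python illustration of the general technique; it is <em>not</em> how Zygote or Swift implement it, since Zygote does source-to-source reverse mode):</p>
<pre><code>class Dual:
    """Carries a value and its derivative; arithmetic applies the chain rule."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).deriv   # seed dx/dx = 1

f = lambda x: 3 * x * x + 2 * x + 1
print(f(10), derivative(f, 10.0))  # 321 62.0, i.e. f'(10) = 6*10 + 2
</code></pre>
<p>The point is that the derivative is computed exactly, alongside the value, without ever building a symbolic expression; that is the essence of automatic differentiation.</p>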
<p>It seems you are trying to read more into what differential programming is than it really is. It is not a new way of programming, but just a nice feature added for doing derivatives.</p>
<p>Perhaps if they named it <em>differentiable syntax</em> it might have been more clear. The use of the word <em>programming</em> gives it more panache than I think it deserves.</p>
<p>EDIT</p>
<p>After skimming Swift <a href="https://forums.swift.org/t/differentiable-programming-mega-proposal/28547" rel="nofollow noreferrer">Differentiable Programming Mega-Proposal</a> and trying to compare that with the Julia example using Zygote, I would have to modify the answer into parts that talk about Zygote and then switch gears to talk about Swift. They each took a different path, but the commonality and bottom line is that the languages know something about differentiation which makes the job of coding them easier and hopefully produces less errors.</p>
<p>About the Wikipedia quote that</p>
<blockquote>
<p>the programs can be differentiated throughout</p>
</blockquote>
<p>At first reading it seems like nonsense, or at least it lacks enough detail to be understood in context, which is why I am sure you asked.</p>
<p>After many years of digging into what others are trying to communicate, one learns to take a statement with a grain of salt unless the source has been peer reviewed, and, unless it is absolutely necessary to understand, to just ignore it. In this case, if you ignore the sentence, most of what your reference says makes sense. However, I take it that you want an answer, so let's try and figure out what it means.</p>
<p>The key word that has me perplexed is <em>throughout</em>. Since you note the statement came from Wikipedia, and Wikipedia gives three references for the statement, I searched for the word <em>throughout</em>; it appears in only one of them:</p>
<p><a href="https://arxiv.org/pdf/1907.07587.pdf" rel="nofollow noreferrer">∂P: A Differentiable Programming System to Bridge Machine Learning and Scientific Computing</a></p>
<blockquote>
<p>Thus, since our ∂P system does not require primitives to handle new
types, this means that almost all functions and types defined
throughout the language are automatically supported by Zygote, and
users can easily accelerate specific functions as they deem necessary.</p>
</blockquote>
<p>So my take on this is that by going back to the source, e.g. the paper, you can better understand how that percolated up into Wikipedia, but it seems that the meaning was lost along the way.</p>
<p>In this case, if you really want to know the meaning of that statement, you should go to the Wikipedia <a href="https://en.wikipedia.org/wiki/Talk:Differentiable_programming" rel="nofollow noreferrer">talk page</a> and ask the author of the statement directly.</p>
<p>Also note that the paper referenced is not peer reviewed, so the statements in there may not have any meaning amongst peers at present. As I said, I would just ignore it and get on with writing wonderful code.</p> | 2019-12-15 11:38:39.653000+00:00 | 2021-01-18 09:47:08.853000+00:00 | 2021-01-18 09:47:08.853000+00:00 | null | 59,338,607 | <p>Native support for differential programming has been added to Swift for the <a href="https://www.tensorflow.org/swift" rel="nofollow noreferrer">Swift for Tensorflow</a> project. Julia has similar with <a href="https://github.com/FluxML/Zygote.jl" rel="nofollow noreferrer">Zygote</a>.</p>
<p>What exactly is differentiable programming?</p>
<ul>
<li><p>what does it enable? <a href="https://en.wikipedia.org/wiki/Differentiable_programming" rel="nofollow noreferrer">Wikipedia</a> says</p>
<blockquote>
<p>the programs can be differentiated throughout</p>
</blockquote>
<p>but what does that mean?</p>
</li>
<li><p>how would one use it (e.g. a simple example)?</p>
</li>
<li><p>and how does it relate to automatic differentiation (the two seem conflated a lot of the time)?</p>
</li>
</ul> | 2019-12-14 19:51:58.243000+00:00 | 2022-03-07 17:17:55.843000+00:00 | 2022-03-07 17:17:55.843000+00:00 | language-agnostic|differentiation|automatic-differentiation | ['https://www.cs.utexas.edu/users/novak/asg-symdif.html', 'https://en.wikipedia.org/wiki/Automatic_differentiation', 'https://en.wikipedia.org/wiki/First-class_citizen', 'https://github.com/FluxML/Zygote.jl', 'http://web.mit.edu/wwmath/calculus/differentiation/notation.html', 'https://en.wikipedia.org/wiki/Programming_paradigm', 'https://forums.swift.org/t/differentiable-programming-mega-proposal/28547', 'https://arxiv.org/pdf/1907.07587.pdf', 'https://en.wikipedia.org/wiki/Talk:Differentiable_programming'] | 9 |
6,641,140 | <p>For an accurate and complete (works with any pair of points) solution
use my geodesic calculator at
<a href="http://geographiclib.sf.net/cgi-bin/GeodSolve" rel="nofollow">http://geographiclib.sf.net/cgi-bin/GeodSolve</a>. The formulas are given in
<a href="http://arxiv.org/abs/1102.1215" rel="nofollow">http://arxiv.org/abs/1102.1215</a>.</p> | 2011-07-10 12:38:05.450000+00:00 | 2014-03-16 20:35:11.233000+00:00 | 2014-03-16 20:35:11.233000+00:00 | null | 6,264,571 | <p>I have two points whose latitude and longitude i know.</p>
<p>How can I calculate the distance (in km and miles) between them? What is the formula?</p> | 2011-06-07 11:34:43.783000+00:00 | 2014-03-16 20:35:11.233000+00:00 | null | distance|latitude-longitude | ['http://geographiclib.sf.net/cgi-bin/GeodSolve', 'http://arxiv.org/abs/1102.1215'] | 2
37,633,403 | <p>I think you can adapt the architecture for <a href="https://arxiv.org/pdf/1411.4555.pdf" rel="nofollow">image captioning</a> which uses a CNN for input analysis feeding a RNN for language generation.</p>
<p>For your title generation application, think of it as document captioning instead of image captioning and train a <a href="http://www.nlpr.ia.ac.cn/cip/~liukang/liukangPageFile/Recurrent%20Convolutional%20Neural%20Networks%20for%20Text%20Classification.pdf" rel="nofollow">Recurrent CNN</a> on the document text and the RNN on the title text.</p>
<p>This is pretty ambitious for a beginner, so I suggest you start simpler, with off-the-shelf examples of <a href="http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/" rel="nofollow">CNN</a> and <a href="https://www.tensorflow.org/versions/r0.8/tutorials/recurrent/index.html" rel="nofollow">RNN</a> to understand the components and basic concepts of deep learning.</p> | 2016-06-04 18:01:38.213000+00:00 | 2016-06-04 18:19:23.553000+00:00 | 2016-06-04 18:19:23.553000+00:00 | null | 37,628,218 | <p>I'm interested in studying deep learning.
Recently, I've researched text mining using a news articles dataset.</p>
<p>I want to extract the one or two most important sentences from the body of an article.
So, I approximate this problem with a simpler version: finding the title of an article. </p>
<p>The training examples will be like this:
x is a collection of article bodies, and
y is a collection of article titles.</p>
<p>But test examples only have x (without y).
After training a model, I predict the titles of the test set, using only the bodies. </p>
<p>How can I build the model?
As a beginner in deep learning, I would like some insight or hints for this problem.</p>
<p>Thanks!</p> | 2016-06-04 08:41:41.213000+00:00 | 2016-06-04 18:19:23.553000+00:00 | null | deep-learning|text-mining|recurrent-neural-network | ['https://arxiv.org/pdf/1411.4555.pdf', 'http://www.nlpr.ia.ac.cn/cip/~liukang/liukangPageFile/Recurrent%20Convolutional%20Neural%20Networks%20for%20Text%20Classification.pdf', 'http://www.wildml.com/2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/', 'https://www.tensorflow.org/versions/r0.8/tutorials/recurrent/index.html'] | 4 |
48,474,711 | <p>This is a well-known problem called "finding extremal sets"; unfortunately, there is nothing fundamentally faster known than the obvious approach of testing a newly inserted set against all existing sets, but good heuristic improvements exist. Here is a recent paper discussing this problem: <a href="https://arxiv.org/abs/1508.01753" rel="nofollow noreferrer">https://arxiv.org/abs/1508.01753</a></p>
<p>An open-source implementation of a related algorithm:
<a href="https://code.google.com/archive/p/google-extremal-sets/" rel="nofollow noreferrer">https://code.google.com/archive/p/google-extremal-sets/</a></p> | 2018-01-27 10:12:50.387000+00:00 | 2018-01-27 22:43:36.347000+00:00 | 2018-01-27 22:43:36.347000+00:00 | null | 48,447,833 | <p>I'm working on a problem which involves going through a lot of data. To reduce the work (because current calculations take about two weeks of compute time, and I'd like to reduce that dramatically) I came up with an algorithm which would be much faster if it was able to avoid a certain type of duplication. (The current algorithm avoids storing this information because it is too large, unreduced, to fit in memory.)</p>
<p>I have a collection of sets, and I don't want to insert a set <code>A</code> if there is already a set <code>B</code> which is a subset of <code>A</code>. At the moment the sets are represented by integers where individual binary digits represent a particular element being present or absent. In that interpretation the set/integer <code>A</code> should not be inserted if there is already a set/integer <code>B</code> such that <code>(~A) & B</code> is 0, where <code>~</code> is bitwise negation and <code>&</code> is bitwise AND.</p>
<p>For example, if my collection has the following sets</p>
<pre><code>[ {a,b}, {b,c}, {b,d,e} ]
</code></pre>
<p>and I ask to add {b,c,e}, it should not be added (since {b,c} is already there), and similarly with {a,b} (since {a,b} is there), but {a,e} should be added.</p>
<p>The numeric equivalent would be starting with</p>
<pre><code>[ `0b11`, `0b110`, `0b11010` ]
</code></pre>
<p>where <code>0b10110</code> is not added since <code>(~0b10110) & 0b110 == 0</code>, <code>0b11</code> is not added since <code>(~0b11) & 0b11 == 0</code>, but <code>0b10001</code> can be added.</p>
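<p>For reference, a naive linear-scan version of the insert-with-pruning behaviour I want (far too slow at my scale, which is exactly why I am asking) would be:</p>
<pre><code>def try_insert(collection, a):
    # reject a if some existing b is a subset of a: (~a) & b == 0
    for b in collection:
        if (~a) & b == 0:
            return False
    # prune existing supersets of a: a is a subset of b when (~b) & a == 0
    collection[:] = [b for b in collection if (~b) & a != 0]
    collection.append(a)
    return True
</code></pre>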
<p><em>Ideally</em> the structure would prune itself as new sets are added, so if <code>{c}</code> were added all existing sets containing <code>c</code> would be removed. But it's acceptable if it doesn't update in that way as long as I can normalize it to that form in some not-too-expensive way every so often.</p> | 2018-01-25 16:36:41.087000+00:00 | 2018-01-27 22:43:36.347000+00:00 | null | data-structures|language-agnostic|set|bit-manipulation|subset | ['https://arxiv.org/abs/1508.01753', 'https://code.google.com/archive/p/google-extremal-sets/'] | 2 |
71,375,987 | <p>Usually this is a little bit tricky; let me share what is on the top of my mind:</p>
<ol>
<li><p>If you just want to get a sense of what's happening inside a simple neural net, check out this <a href="https://www.cs.ryerson.ca/%7Eaharley/vis/conv/" rel="nofollow noreferrer">link</a>.</p>
</li>
<li><p>If you want to visualize, check <a href="https://github.com/charliedavenport/LeNet-MNIST-Demo" rel="nofollow noreferrer">this repo</a>. You just need to sync the last sections of the notebook with your model; it has a cool animation which you can see for LeNet on MNIST.</p>
</li>
<li><p>On the more technical side, getting a sense of how a CNN-like model makes a decision is covered by topics like XAI, and more specifically <a href="https://arxiv.org/abs/1610.02391" rel="nofollow noreferrer">grad-cam</a>.</p>
</li>
</ol>
<p>Hope these are helpful.</p> | 2022-03-07 02:53:54.733000+00:00 | 2022-03-07 02:53:54.733000+00:00 | null | null | 71,375,886 | <p>I'm working on Python deep learning code right now, and I want to know what is going on inside the network I designed. Below is the sample code I'm working on.</p>
<p>My question is: is it possible to see the processed image inside the network? For example, I want to see how my input image changed after "p1" and "p2". Is it possible? If it is possible, how can I see it?</p>
<pre><code>import tensorflow as tf
IMG_WIDTH = 256
IMG_HEIGHT = 256
IMG_CHANNELS = 3
#define input
inputs = tf.keras.layers.Input(shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS))
# s = tf.keras.layers.Lambda(lambda x: x/255)(inputs)
#define Contraction path
c1_1 = tf.keras.layers.Conv2D(64,(3,3),activation='relu', padding='same')(inputs)
c1_2 = tf.keras.layers.Conv2D(64,(3,3),activation='relu', padding='same')(c1_1)
p1 = tf.keras.layers.MaxPooling2D((2,2), strides = 2)(c1_2)
c2_1 = tf.keras.layers.Conv2D(128,(3,3),activation='relu', padding='same')(p1)
c2_2 = tf.keras.layers.Conv2D(128,(3,3),activation='relu', padding='same')(c2_1)
p2 = tf.keras.layers.MaxPooling2D((2,2), strides = 2)(c2_2)
</code></pre> | 2022-03-07 02:35:28.320000+00:00 | 2022-03-07 02:53:54.733000+00:00 | null | python|tensorflow|deep-learning|layer | ['https://www.cs.ryerson.ca/%7Eaharley/vis/conv/', 'https://github.com/charliedavenport/LeNet-MNIST-Demo', 'https://arxiv.org/abs/1610.02391'] | 3 |
37,213,922 | <p>There are a few answers on SO, such as <a href="https://stackoverflow.com/questions/14106010/parallel-execution-of-random-forest-in-r">parallel execution of random forest in R</a> and <a href="https://stackoverflow.com/questions/7830255/suggestions-for-speeding-up-random-forests/7831848#7831848">Suggestions for speeding up Random Forests</a>, that I would take a look at.</p>
<p>Those posts are helpful, but are a bit older. The <code>ranger</code> package is an especially fast implementation of random forest, so if you are new to this it might be the easiest way to speed up your model training. <a href="http://arxiv.org/pdf/1508.04409.pdf" rel="nofollow noreferrer">Their paper</a> discusses the tradeoffs of some of the available packages - depending on your data size and number of features, which package gives you the best performance will vary.</p> | 2016-05-13 15:29:11.383000+00:00 | 2016-05-13 15:29:11.383000+00:00 | 2017-05-23 12:24:49.367000+00:00 | null | 37,213,279 | <p>Through searching and asking, I've found many packages I can use to make use of all the cores of my server, and many packages that can do random forest. </p>
<p>I'm quite new at this, and I'm getting lost between all the ways to parallelize the training of my random forest. Could you give some advice on reasons to use and/or avoid each of them, or some specific combinations of them (and with or without <code>caret</code> ?) that have made their proof ?</p>
<p>Packages for parallelization : </p>
<p><code>doParallel</code>, </p>
<p><code>doSNOW</code>, </p>
<p><code>doSMP</code> (discontinued ?), </p>
<p><code>doMC</code> </p>
<p>(and what about <code>mclapply</code> ?)</p>
<hr>
<p>Packages for random forest : </p>
<p>[<code>caret</code> + some of the following] </p>
<p><code>rf</code>, </p>
<p><code>parRF</code>, </p>
<p><code>randomForest</code>, </p>
<p><code>ranger</code>, </p>
<p><code>Rborist</code>,</p>
<p><code>parallelRandomForest</code> (crashes my R Studio session...)</p>
<p>Thanks</p> | 2016-05-13 14:55:49.017000+00:00 | 2016-05-13 15:29:11.383000+00:00 | null | r|parallel-processing|random-forest | ['https://stackoverflow.com/questions/14106010/parallel-execution-of-random-forest-in-r', 'https://stackoverflow.com/questions/7830255/suggestions-for-speeding-up-random-forests/7831848#7831848', 'http://arxiv.org/pdf/1508.04409.pdf'] | 3 |
49,250,270 | <p>Although a softmax activation will ensure that the outputs satisfy the surface Kolmogorov axioms (probabilities always sum to one, no probability below zero and above one) and the individual values can be seen as a measure of the network's confidence, you would need to calibrate the model (train it not as a classifier but rather as a probability predictor) or use a bayesian network before you could formally claim that the output values are your per-class prediction confidences. (<a href="https://arxiv.org/pdf/1706.04599.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1706.04599.pdf</a>)</p> | 2018-03-13 07:14:30.943000+00:00 | 2018-03-13 07:14:30.943000+00:00 | null | null | 49,247,891 | <p>If there are 4 classes and output probability from the model is A=0.30,B=0.40,C=0.20 D=0.10 then can I say that output from the model is class B with 40% confidence? If not then why?</p> | 2018-03-13 03:33:54.150000+00:00 | 2018-03-13 07:14:30.943000+00:00 | 2018-03-13 04:52:47.710000+00:00 | machine-learning|neural-network|deep-learning|xgboost | ['https://arxiv.org/pdf/1706.04599.pdf'] | 1 |
71,784,084 | <p>It depends on the queries you want to do on the time series data, but I suspect the answer is <strong>NO</strong>.</p>
<p>Typical queries on time series data include the following:</p>
<ul>
<li>moving averages; e.g. 30 day average of stock prices</li>
<li>median</li>
<li>accounting functions; e.g. average growth rate, amortization, internal rate of return and so on.</li>
<li>statistical functions; e.g. autocorrelation, and correlation between two series.</li>
<li>pattern finding; i.e. find a time series (or multiple time series) that has a similar pattern to this time series</li>
</ul>
<p>In general, time series data have a greater need for aggregation over a collection of data rather than for creating a graph of the data. This will likely cause any time-series-related queries to have poor performance on a graph-like database.</p>
<p>A factor to consider is that the amount of data stored for time series can be way bigger than that of a typical knowledge graph, depending on the sample rate of the time series data.</p>
<p>Here are some of the references that brought me to this conclusion:</p>
<ol>
<li><a href="http://www.it.uu.se/research/group/udbl/Theses/HenrikAndreJonssonPhD.pdf" rel="nofollow noreferrer">Indexing Strategies for Time Series Data</a></li>
<li><a href="https://arxiv.org/abs/1910.09017" rel="nofollow noreferrer">Demystifying Graph Databases - Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries</a></li>
</ol> | 2022-04-07 14:31:29.817000+00:00 | 2022-04-07 14:31:29.817000+00:00 | null | null | 71,780,910 | <p>Would storing time series data in a Knowledge Graph be a good idea ? What could be the benefits of doing so ?</p> | 2022-04-07 10:58:28.997000+00:00 | 2022-04-07 14:31:29.817000+00:00 | null | knowledge-graph | ['http://www.it.uu.se/research/group/udbl/Theses/HenrikAndreJonssonPhD.pdf', 'https://arxiv.org/abs/1910.09017'] | 2 |
41,971,995 | <p><a href="https://arxiv.org/pdf/1511.06939.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1511.06939.pdf</a></p>
<p>In this paper, recall is calculated as "proportion of cases having the desired item amongst the top-k items in all test cases."</p> | 2017-02-01 04:38:29.437000+00:00 | 2017-02-01 04:38:29.437000+00:00 | null | null | 41,971,925 | <p>As I know, in a Top-N Recommendation System, the formula for Recall is as below:</p>
<pre><code>recall = |{A} and {B}| / |{A}|
</code></pre>
<p><strong>where {A} are the things that user actually bought, {B} are the Top-N things that system recommended.</strong></p>
<p>But in RNN based recommendation system, it is a little different from the traditional recommendation system such as kNN based recommendation system (user based or item based system).</p>
<p><strong>The target of RNN based recommendation system is to predict the thing that user would probably buy in next time "t+1". In each step, system will give a Top-N recommendation.</strong> Reference paper:<a href="https://arxiv.org/abs/1608.07400" rel="nofollow noreferrer">enter link description here</a></p>
<p>So how do I calculate Recall for a Recurrent Neural Network (RNN) based Recommendation System?</p> | 2017-02-01 04:30:11.133000+00:00 | 2018-04-02 09:55:17.733000+00:00 | 2018-04-02 09:55:17.733000+00:00 | deep-learning|recommendation-engine|recurrent-neural-network|collaborative-filtering|precision-recall | ['https://arxiv.org/pdf/1511.06939.pdf'] | 1
59,658,911 | <p>I have since found an answer to this question by looking into the paper EMNIST: an extension of MNIST to handwritten letters by G. Cohen (available here: <a href="https://arxiv.org/pdf/1702.05373v1.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1702.05373v1.pdf</a>). </p>
<p>This explains that many letters are problematic for character recognition because the upper- and lower-case variants are very similar, which causes problems when trying to classify these letters. To counteract this, they merged the letters for which they thought this was a problem.</p>
<p>From the paper:</p>
<blockquote>
<p>The merged classes, as suggested by the NIST, are for the letters C,
I, J, K, L, M, O, P, S, U, V, W, X, Y and Z.</p>
</blockquote>
<p>This accounts for the missing classes (although I would have liked to see a 62 balanced class option or a 36 class option with all the letters merged).</p> | 2020-01-09 07:15:29.043000+00:00 | 2020-01-09 07:15:29.043000+00:00 | null | null | 59,638,769 | <p>I am using EMNIST as a dataset for a text detection and recognition using deep learning. I downloaded the datasets from <a href="https://pypi.org/project/emnist/" rel="nofollow noreferrer">https://pypi.org/project/emnist/</a> (using <code>pip install emnist</code>). The datasets are from <a href="https://www.nist.gov/itl/products-and-services/emnist-dataset" rel="nofollow noreferrer">https://www.nist.gov/itl/products-and-services/emnist-dataset</a> it describes them as follows:</p>
<blockquote>
<p>EMNIST ByClass: 814,255 characters. 62 unbalanced classes.</p>
<p>EMNIST ByMerge: 814,255 characters. 47 unbalanced classes.</p>
<p>EMNIST Balanced: 131,600 characters. 47 balanced classes.</p>
<p>EMNIST Letters: 145,600 characters. 26 balanced classes.</p>
<p>EMNIST Digits: 280,000 characters. 10 balanced classes.</p>
<p>EMNIST MNIST: 70,000 characters. 10 balanced classes.</p>
</blockquote>
<p>Most of these make sense for example 62 classes is made up of 10 digits, 26 capital letters and 26 lower case. But for ByMerge and Balanced we have 47.</p>
<p>I have looked into the data myself and find 10 digits, 26 letters (mixture of uppercase and lowercase) and then as far as I can tell the remaining 11 are random lowercase letters ('a','b','d','e','f','g','h','n','q','r','t').</p>
<p>Does anyone know why these extra 11 have been specifically included?</p> | 2020-01-08 02:48:04.370000+00:00 | 2020-07-19 06:22:51.903000+00:00 | 2020-06-20 09:12:55.060000+00:00 | deep-learning|dataset | ['https://arxiv.org/pdf/1702.05373v1.pdf'] | 1 |
66,468,394 | <p>The following definition should explain these terms:</p>
<p>"""
...many-shot classes (classes each with over 100 training samples), medium-shot classes (classes each with 20∼100 training samples) and few-shot classes (classes under 20 training samples)
"""</p>
<p>source: <a href="https://arxiv.org/pdf/1904.05160.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1904.05160.pdf</a></p> | 2021-03-04 04:01:33.743000+00:00 | 2021-03-04 04:01:33.743000+00:00 | null | null | 65,956,490 | <p>While I was reading one CVPR 2020 paper, titled <em><strong>Equalization loss for long-tailed object recognition</strong></em>, I could not understand the terms "many shot", "medium shot", and "few shot". Could you give me some advice for understanding those terms?</p>
<p><a href="https://i.stack.imgur.com/l8Xah.png" rel="nofollow noreferrer">Click 1</a></p>
<p><a href="https://i.stack.imgur.com/oZ6Rg.png" rel="nofollow noreferrer">Click 2</a></p> | 2021-01-29 14:38:56.997000+00:00 | 2021-03-04 04:01:33.743000+00:00 | null | loss | ['https://arxiv.org/pdf/1904.05160.pdf'] | 1 |
62,429,447 | <p>If you have a fixed object with different shapes and movements, pair-wise- or multi-matching can be a helpful solution for you. For example see <a href="https://arxiv.org/pdf/1811.10541.pdf" rel="nofollow noreferrer">this paper</a>. This method can be extended for higher-dimensions as well.</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/Ndlf9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ndlf9.png" alt="enter image description here"></a></p>
</blockquote>
<p>If you have two different sets of points that come from different objects and you want to find the similarity between them, one solution is to compute the <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.937&rep=rep1&type=pdf" rel="nofollow noreferrer">discrete Fréchet distance</a> for both sets of points and then compare the values.</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/EexZp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EexZp.png" alt="enter image description here"></a></p>
</blockquote>
<p>The other related concept is <strong>Shape Reconstruction</strong>. You can mix the result of a proper shape reconstruction algorithm with two previous methods to compute the similarity:</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/Ffn6Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ffn6Q.png" alt="enter image description here"></a></p>
</blockquote> | 2020-06-17 12:51:15.710000+00:00 | 2020-06-17 13:03:12.910000+00:00 | 2020-06-17 13:03:12.910000+00:00 | null | 62,427,278 | <p>I have 2 sets of points in 3D that have the same count, and I want to know if they have the same pattern. I thought I might project them onto the XZ, XY and YZ planes and then compare the projections in each plane, but I am not sure how to do this. I thought the convex hull might help, but it won't be accurate.
Is there an easy algorithm to do that? Complexity is not a big issue, since the point count will be tiny. I am implementing this in Java.</p>
<p>Can I solve this directly in 3D with the same algorithm?</p>
<p>The attached image shows an example of what I mean.</p>
<p>Edit:
There is no guarantee on point order.
There is no scaling; there are rotation and translation only.</p>
<p><a href="https://i.stack.imgur.com/EUbL4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EUbL4.png" alt="enter image description here"></a></p> | 2020-06-17 10:52:24.783000+00:00 | 2020-06-19 09:10:54.247000+00:00 | 2020-06-17 13:53:58.283000+00:00 | algorithm|computational-geometry|similarity | ['https://arxiv.org/pdf/1811.10541.pdf', 'https://i.stack.imgur.com/Ndlf9.png', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.937&rep=rep1&type=pdf', 'https://i.stack.imgur.com/EexZp.png', 'https://i.stack.imgur.com/Ffn6Q.png'] | 5 |
45,439,014 | <p>I ran into the same problem when I was trying to understand skip-gram this week. I went through what seemed to be the entire Internet without finding an answer. Fortunately I was able to figure it out.</p>
<p>Firstly, the outputs you mentioned in your question <strong>are indeed the same</strong>. You are right about that. But it still makes sense, because the reason we have, say, <strong>n</strong> output vectors is that we have n words in the skip-gram window. Each output is going to be compared with a different word in this window, and we are going to compute their errors individually. Then we update the matrices with back-propagation.</p>
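<p>As a toy sketch of that point (my own illustration, assuming a plain softmax output layer with no negative sampling or hierarchical softmax; <code>W_in</code> and <code>W_out</code> are the input/output weight matrices):</p>
<pre><code>import numpy as np

def skipgram_step(W_in, W_out, center, context, lr=0.025):
    h = W_in[center]                                  # hidden layer = input vector
    scores = W_out @ h                                # identical scores for every output panel
    p = np.exp(scores - scores.max()); p /= p.sum()   # shared softmax distribution
    e = len(context) * p                              # sum of (p - onehot) over context words:
    for c in context:                                 # each identical output is matched
        e[c] -= 1.0                                   # against a *different* target word
    grad_h = W_out.T @ e
    W_out -= lr * np.outer(e, h)
    W_in[center] -= lr * grad_h
</code></pre>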
<p>I strongly recommend you to read this article: <a href="https://arxiv.org/pdf/1411.2738.pdf" rel="nofollow noreferrer">word2vec Parameter Learning Explained</a>. It will explain all your problems concerning the basics of word2vec.</p>
<p>Cheers!</p> | 2017-08-01 13:19:06.450000+00:00 | 2017-08-01 13:19:06.450000+00:00 | null | null | 42,964,840 | <p>If the all weight matrices between the hidden and output layer in Skip gram Word2vec model are the same, how are the outputs different one from another?</p> | 2017-03-22 23:57:26.500000+00:00 | 2017-11-21 17:11:41.847000+00:00 | 2017-03-23 16:47:15.687000+00:00 | machine-learning|neural-network|word2vec | ['https://arxiv.org/pdf/1411.2738.pdf'] | 1 |
70,188,031 | <p>To get only the second-to-last chunk of the domain, you could modify your regex to:</p>
<pre><code>[re.search('https?://(?:[^/]+\.)?([A-Za-z_0-9-]+)\.[^/.]+(?:/.*)?', url).group(1)
for url in urls]
</code></pre>
<p>Output:</p>
<pre><code>['arxiv', 'doi', 'scopus']
</code></pre>
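<p>If you can take a dependency, the <code>tldextract</code> package also handles multi-part suffixes like <code>.co.uk</code>, which the regex above would trip over (assuming the package is installed):</p>
<pre><code>import tldextract

[tldextract.extract(url).domain for url in urls]
# ['arxiv', 'doi', 'scopus']
</code></pre>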
<h5>urllib</h5>
<p>@AbdulNiyasPM had a nice answer (too bad it was deleted); you can modify it to get what you want:</p>
<pre><code>from urllib.parse import urlparse
[urlparse(url).hostname.split('.')[-2]
for url in urls]
</code></pre> | 2021-12-01 16:41:43.947000+00:00 | 2021-12-01 16:47:47.353000+00:00 | 2021-12-01 16:47:47.353000+00:00 | null | 70,187,833 | <p>I have the following list of URLs:</p>
<pre><code>urls = ["http://arxiv.org/pdf/1611.08097", "https://doi.org/10.1109/tkde.2016.2598561", "https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85116544648&origin=inward"]
</code></pre>
<p>From each element of the list, I am trying to extract just the domain names, like <code>arxiv</code>, <code>doi</code>, <code>scopus</code>.</p>
<p>For that I have this code:</p>
<pre><code>import re
for url in urls:
print(re.search('https?://([A-Za-z_0-9.-]+).*', url).group(1))
</code></pre>
<p>The output of print:</p>
<pre><code>arxiv.org
doi.org
www.scopus.com
</code></pre>
<p>How can I modify the above regex to extract just the domain and no other stuff like <code>www.</code>, <code>.com</code>, <code>.org</code> etc?</p>
<p>Thanks in advance.</p> | 2021-12-01 16:28:09.230000+00:00 | 2021-12-01 16:47:47.353000+00:00 | null | python|python-3.x|regex | [] | 0 |
45,243,642 | <p>If you are looking for some quick code which runs on the CPU, take a look at Drew-NF. This is a Python implementation of the neural network discussed in the paper <a href="https://arxiv.org/pdf/1604.02532.pdf" rel="nofollow noreferrer">Tubelets with Convolutional Neural Networks for Object Detection from Videos</a>.
To run the script you need:</p>
<ol>
<li><p>Tensorflow</p></li>
<li><p>OpenCV</p></li>
</ol>
<p><a href="https://github.com/DrewNF/Tensorflow_Object_Tracking_Video" rel="nofollow noreferrer">DrewNF Github Repo</a> </p> | 2017-07-21 17:38:25.617000+00:00 | 2017-07-21 17:38:25.617000+00:00 | null | null | 36,602,093 | <p>I'm trying to tracking people in the video. But I can not find a suitable algorithm that would behave similarly to <a href="https://www.youtube.com/watch?v=Qjr3RYecv3U" rel="nofollow">https://www.youtube.com/watch?v=Qjr3RYecv3U</a>.</p>
<p>I tried template matching in combination with optical flow, but I always lose the tracked object when it overlaps another object. Could someone recommend a suitable method for tracking?</p>
<p>I am using Python and OpenCV.</p> | 2016-04-13 14:49:30.577000+00:00 | 2017-07-21 17:38:25.617000+00:00 | null | python|python-2.7|opencv|tracking | ['https://arxiv.org/pdf/1604.02532.pdf', 'https://github.com/DrewNF/Tensorflow_Object_Tracking_Video'] | 2 |
45,177,006 | <p>The <a href="http://image-net.org/challenges/LSVRC/2017/results" rel="nofollow noreferrer">results</a> of the ILSVRC 2017 competition were released yesterday (July 17, 2017). The winner in the two tracking categories, Task 3c (Object detection/tracking from video with provided training data) and Task 3d (Object detection/tracking from video with additional training data), was this team:</p>
<p>Jiankang Deng(1), Yuxiang Zhou(1), Baosheng Yu(2), Zhe Chen(2), Stefanos Zafeiriou(1), Dacheng Tao(2), (1)Imperial College London, (2)University of Sydney</p>
<p>Here are their publications, source code, and a presentation:
[1] <a href="https://arxiv.org/abs/1611.07715" rel="nofollow noreferrer">Deep Feature Flow for Video Recognition</a>
Xizhou Zhu, Yuwen Xiong, Jifeng Dai, Lu Yuan, and Yichen Wei, IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. </p>
<p>[2] <a href="https://arxiv.org/pdf/1703.10025.pdf" rel="nofollow noreferrer">Flow-Guided Feature Aggregation for Video Object Detection</a>, Xizhou Zhu, Yujie Wang, Jifeng Dai, Lu Yuan, and Yichen Wei. Arxiv tech report, 2017.</p>
<p>Presentation
<a href="https://www.youtube.com/watch?v=J0rMHE6ehGw" rel="nofollow noreferrer">https://www.youtube.com/watch?v=J0rMHE6ehGw</a></p>
<p>Source Code
<a href="https://github.com/msracver/Deep-Feature-Flow" rel="nofollow noreferrer">https://github.com/msracver/Deep-Feature-Flow</a></p>
<p>The code has the following prerequisites:</p>
<ul>
<li>Python 3.2.0+</li>
<li>Microsoft's MXNet</li>
<li>Cython</li>
<li>OpenCV (Python bindings)</li>
</ul>
<p>Their code requires a GPU with at least 6GB of memory.</p>
<p>Another option is <a href="http://guanghan.info/projects/ROLO/" rel="nofollow noreferrer">ROLO</a>. The author is Guanghan Ning and he uses You Only Look Once (YOLO) for detection and uses TensorFlow to implement LSTMs for tracking.</p>
<p>He published a paper:
<a href="https://arxiv.org/abs/1607.05781" rel="nofollow noreferrer">Spatially Supervised Recurrent Convolutional Neural Networks for Visual Object Tracking</a>, IEEE International Symposium on Circuits and Systems, 2017</p>
<p>His code is here: <a href="https://github.com/Guanghan/ROLO" rel="nofollow noreferrer">https://github.com/Guanghan/ROLO</a></p>
<p>Project page: <a href="http://guanghan.info/projects/ROLO/" rel="nofollow noreferrer">http://guanghan.info/projects/ROLO/</a></p>
<p>Prerequisites:</p>
<ul>
<li>Python 2.7 or 3.3+</li>
<li>TensorFlow</li>
<li>Scipy</li>
<li>OpenCV (Python bindings)</li>
</ul>
<p>Some videos of his work:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=qElDUVmYSpY" rel="nofollow noreferrer">ROLO example on unseen sequence: Surfer</a></li>
<li><a href="https://www.youtube.com/watch?v=7dDsvVEt4ak" rel="nofollow noreferrer">ROLO example on unseen sequence: Boy</a></li>
<li><a href="https://www.youtube.com/watch?v=w7Bxf4guddg" rel="nofollow noreferrer">ROLO example on unseen sequence: Jumping</a></li>
</ul> | 2017-07-18 21:02:27.837000+00:00 | 2017-07-18 21:52:28.660000+00:00 | 2017-07-18 21:52:28.660000+00:00 | null | 36,602,093 | <p>I'm trying to track people in a video, but I cannot find a suitable algorithm that would behave similarly to <a href="https://www.youtube.com/watch?v=Qjr3RYecv3U" rel="nofollow">https://www.youtube.com/watch?v=Qjr3RYecv3U</a>.</p>
<p>I tried template matching in combination with optical flow, but I always lose the tracked object when it overlaps another object. Could someone recommend a suitable method for tracking?</p>
<p>I am using Python and OpenCV.</p> | 2016-04-13 14:49:30.577000+00:00 | 2017-07-21 17:38:25.617000+00:00 | null | python|python-2.7|opencv|tracking | ['http://image-net.org/challenges/LSVRC/2017/results', 'https://arxiv.org/abs/1611.07715', 'https://arxiv.org/pdf/1703.10025.pdf', 'https://www.youtube.com/watch?v=J0rMHE6ehGw', 'https://github.com/msracver/Deep-Feature-Flow', 'http://guanghan.info/projects/ROLO/', 'https://arxiv.org/abs/1607.05781', 'https://github.com/Guanghan/ROLO', 'http://guanghan.info/projects/ROLO/', 'https://www.youtube.com/watch?v=qElDUVmYSpY', 'https://www.youtube.com/watch?v=7dDsvVEt4ak', 'https://www.youtube.com/watch?v=w7Bxf4guddg'] | 12 |
65,296,305 | <p>The first answer is from 2015 and showing its age.</p>
<p>Today, CNNs typically also use batchnorm - while there is some debate why this helps: the inventors mention covariate shift: <a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a>
There are other theories like smoothing the loss landscape: <a href="https://arxiv.org/abs/1805.11604" rel="nofollow noreferrer">https://arxiv.org/abs/1805.11604</a></p>
<p>Either way, it is a method that helps to deal significantly with the vanishing/exploding gradient problem, which is also relevant for CNNs. In CNNs you also apply the chain rule to get gradients. That is, the update of the first layer is proportional to a product of N numbers, where N is the number of layers. It is very likely that this product is either relatively big or small compared to the update of the last layer. This can be seen by looking at the variance of a product of random variables, which quickly grows the more variables are being multiplied: <a href="https://stats.stackexchange.com/questions/52646/variance-of-product-of-multiple-random-variables">https://stats.stackexchange.com/questions/52646/variance-of-product-of-multiple-random-variables</a></p>
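<p>A quick numerical illustration of that product argument (my own sketch):</p>
<pre><code>import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.5, 2.0):
    for n in (5, 20, 80):        # n plays the role of depth / sequence length
        prods = np.prod(rng.normal(0.0, sigma, size=(100000, n)), axis=1)
        # mean |product| = (sigma * sqrt(2/pi))**n:
        # vanishes with depth for sigma=0.5, explodes for sigma=2.0
        print(sigma, n, np.abs(prods).mean())
</code></pre>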
<p>For recurrent networks that have long sequences of inputs, i.e. of length L, the situation is often worse than for CNNs, since there the product consists of L numbers. Often the sequence length L in an RNN is much larger than the number of layers N in a CNN.</p> | 2020-12-14 20:49:28.590000+00:00 | 2020-12-14 20:49:28.590000+00:00 | null | null | 28,953,622 | <p>I think I read somewhere that convolutional neural networks do not suffer from the vanishing gradient problem as much as standard sigmoid neural networks with increasing number of layers. But I have not been able to find a 'why'.</p>
<p>Does it truly not suffer from the problem or am I wrong and it depends on the activation function?
[I have been using Rectified Linear Units, so I have never tested the Sigmoid Units for Convolutional Neural Networks]</p> | 2015-03-09 23:30:02.800000+00:00 | 2020-12-14 20:49:28.590000+00:00 | null | machine-learning|neural-network|classification|conv-neural-network | ['https://arxiv.org/abs/1502.03167', 'https://arxiv.org/abs/1805.11604', 'https://stats.stackexchange.com/questions/52646/variance-of-product-of-multiple-random-variables'] | 3 |
62,000,562 | <p>This is called a "Squeeze-and-Excitation" or "SE" block (see the <a href="https://arxiv.org/pdf/1709.01507.pdf" rel="nofollow noreferrer">paper</a> by Hu et al.). The target of this block is to weight the channels of the previous layer, based on some "global" understanding of each channel's importance and dependencies between channels. See the following figure (from the paper):</p>
<p><a href="https://i.stack.imgur.com/df63P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/df63P.png" alt="figure 1 from the "Squeeze-and-Excitation" paper"></a></p>
<p>and in detail, the difference between a residual connection and an "SE" connection is (again, figure from the paper):</p>
<p><a href="https://i.stack.imgur.com/6uGDc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6uGDc.png" alt="figure 3 from the "Squeeze-and-Excitation" paper"></a></p>
<p>Specifically in the graph you sent, it seems that they use 1x1 pointwise convolutions instead of fully-connected layers, but the idea is similar.</p> | 2020-05-25 10:30:07.353000+00:00 | 2020-05-25 11:20:31.430000+00:00 | 2020-05-25 11:20:31.430000+00:00 | null | 62,000,175 | <p>I know about the residual mapping proposed by <a href="https://arxiv.org/pdf/1512.03385.pdf" rel="nofollow noreferrer">He et al.</a> But recently I came across this kind of mapping in the EfficientNetB0 architecture,
<a href="https://i.stack.imgur.com/h5NDO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h5NDO.png" alt="enter image description here"></a></p>
<p>The residual units add the previous mapping to the processed block, but here they're multiplying. Could someone explain the significance of this operation and what this mapping is called? Can you link a few papers which describe this?</p> | 2020-05-25 10:08:45.453000+00:00 | 2020-05-25 11:20:31.430000+00:00 | null | python|tensorflow|keras|conv-neural-network | ['https://arxiv.org/pdf/1709.01507.pdf', 'https://i.stack.imgur.com/df63P.png', 'https://i.stack.imgur.com/6uGDc.png'] | 3 |
65,386,205 | <p>According to <a href="https://arxiv.org/pdf/1011.0317.pdf" rel="nofollow noreferrer">Negative translations not intuitionistically equivalent to the usual ones</a>,</p>
<blockquote>
<p>The image of the usual negative translations is (essentially) the negative fragment NF, that is the set of all formulas without ∨ and ∃ and whose atomic formulas are all negated</p>
</blockquote>
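<p>For concreteness, the Gödel–Gentzen translation N (a standard example of "the usual negative translations"; this is the textbook presentation, not a quote from the paper) is:</p>
<pre><code>N(P)     = ¬¬P                  for atomic P
N(A ∧ B) = N(A) ∧ N(B)
N(A → B) = N(A) → N(B)
N(¬A)    = ¬N(A)
N(A ∨ B) = ¬(¬N(A) ∧ ¬N(B))
N(∀x A)  = ∀x N(A)
N(∃x A)  = ¬∀x ¬N(A)
</code></pre>
<p>Every formula in its image is ∨- and ∃-free with negated atoms, which is exactly the fragment described above.</p>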
<p>If you look at the rules given at page 3 (or <a href="https://plato.stanford.edu/entries/logic-intuitionistic/#TraClaIntLog" rel="nofollow noreferrer">here</a>), it should be unsurprising that the translation is called negative. The fragment as defined by Harper removes the requirement that</p>
<blockquote>
<p>atomic formulas are all negated</p>
</blockquote> | 2020-12-21 00:33:18.353000+00:00 | 2020-12-21 00:33:18.353000+00:00 | null | null | 65,383,458 | <p>I have been looking into intuitionistic logic and what is called "negative fragment" of intuitionistic propositional logic. However, I was not able to find any resource that explains the reason why it is called "negative fragment".</p>
<p>Any references/suggestions?</p> | 2020-12-20 18:38:31.560000+00:00 | 2020-12-21 00:33:18.353000+00:00 | null | logic|dependent-type|type-theory | ['https://arxiv.org/pdf/1011.0317.pdf', 'https://plato.stanford.edu/entries/logic-intuitionistic/#TraClaIntLog'] | 2 |
61,470,049 | <p>In general HLLs are <strong>not</strong> GDPR compliant. This issue was somewhat addressed in a recent <a href="https://arxiv.org/pdf/1808.05879.pdf" rel="nofollow noreferrer">Google paper</a> (see Section 8: 'Mitigation strategies').</p>
<p>The hash functions used in HLLs are usually not cryptographically secure (usually MurmurHash), hence even with salting you might still be able to answer the question "is a user part of this HLL data structure or not", and that's a no-no.</p>
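<p>One concrete mitigation, expanded on below, is to apply a salted cryptographic hash before aggregation. A sketch in Python (my own illustration; <code>hll.add</code> stands for whichever HLL implementation you use):</p>
<pre><code>import hashlib, os

salt = os.urandom(32)   # rotate and irreversibly destroy every <30-day window

def pseudonymize(user_id: str) -> bytes:
    # salted (keyed) cryptographic hash applied BEFORE HLL aggregation
    return hashlib.blake2b(user_id.encode(), key=salt).digest()

# hll.add(pseudonymize(user_id))
</code></pre>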
<p>Afaik you would be in compliance keeping HLLs around for longer than 30 days iff you apply a salted crypto hash (i.e. a salted SHA-2, BLAKE2b, or BLAKE3) prior to HLL aggregation and you destroy the salt after each <30 day period. This would still allow you to keep the <30 day intervals. You would not be able to merge HLLs over several intervals but only over 28 day chunks, but that can still be super valuable depending on your business needs.</p> | 2020-04-27 23:16:05.690000+00:00 | 2020-04-27 23:16:05.690000+00:00 | null | null | 57,000,767 | <p>Context: we have a telemetry system for our service and would like to track retention, how many users use various features, etc.</p>
<p>There are two options to deal with user identifiable information and be GDPR compliant:</p>
<ol>
<li>Support deleting user information based on request</li>
<li>Keep data for less than 30 days</li>
</ol>
<p>Option #1 is hard to implement (for telemetry system). Option #2 doesn't allow answering questions such as "what is 6-month retention for feature X?".</p>
<p>One idea for getting answers to the above question is to calculate HyperLogLog blobs per feature every week/day and store them separately forever. This will allow us, moving forward, to merge/dcount/calculate retention based on these blobs.</p>
<p>Assuming that any user-identifiable information is gone after 30 days (after the user account gets deleted), will HyperLogLog blobs still allow tracking users or not (i.e. allow answering whether a particular user used feature X two years ago)?</p>
<p>If it allows then it is not compliant (doesn't mean that it is compliant if it doesn't allow).</p> | 2019-07-12 05:41:33.757000+00:00 | 2020-04-27 23:16:05.690000+00:00 | null | hyperloglog | ['https://arxiv.org/pdf/1808.05879.pdf'] | 1 |
42,907,637 | <p>There is no problem in combining Batch Normalization with ReLUs; this is done very often. For example the first paper about <a href="https://arxiv.org/abs/1512.03385" rel="nofollow noreferrer">Residual Networks</a> does this and obtains very good results in ImageNet classification.</p> | 2017-03-20 15:30:01.233000+00:00 | 2017-03-20 15:46:25.667000+00:00 | 2017-03-20 15:46:25.667000+00:00 | null | 42,899,311 | <p>In <a href="https://datascience.stackexchange.com/questions/14352/how-are-deep-learning-nns-different-now-2016-from-the-ones-i-studied-just-4-ye">https://datascience.stackexchange.com/questions/14352/how-are-deep-learning-nns-different-now-2016-from-the-ones-i-studied-just-4-ye</a> I was told that one should use Batch normalization:</p>
<blockquote>
<p>It's been known for a while that NNs train best on data that is normalized --- i.e., there is zero mean and unit variance.</p>
</blockquote>
<p>I was also told one should use ReLU neurons - especially if the inputs are images. Images usually have numbers between 0 and 1, or 0 and 255.</p>
<p><strong>Question:</strong> Is it wise to combine ReLUs with Batch Normalization?</p>
<p>I would imagine that if I do Batch Normalization first, I fear one might lose information once it passes the ReLUs.</p> | 2017-03-20 08:57:27.830000+00:00 | 2017-03-20 15:46:25.667000+00:00 | 2017-04-13 12:50:40.647000+00:00 | neural-network|deep-learning | ['https://arxiv.org/abs/1512.03385'] | 1
39,916,213 | <p>HyperNEAT is primarily a tool for medical applications. A typical setup is to use an EPOC Headset (that is, hardware for detecting EEG waves from the brain) together with an open-source software parser, Emokit: <a href="http://mooc.ee/MTAT.03.291/2014_spring/uploads/Main/Signal%20Quality%20and%20Data%20Visualizer%20for%20Emotiv%20EPOC.pdf" rel="nofollow">Signal Quality and Data Visualizer for Emotiv EPOC</a>. In the above paper only the Fast Fourier Transform is used for analyzing signals, and this is where HyperNEAT comes into the game: HyperNEAT can be trained in a way that lets it interpret EEG signals better (<a href="https://daim.idi.ntnu.no/masteroppgaver/007/7564/masteroppgave.pdf" rel="nofollow">Emotion Recognition in EEG</a>). The CPPN submodule is for translating brainwaves into visually attractive patterns: <a href="https://arxiv.org/pdf/1304.4889.pdf" rel="nofollow">Hands-free Evolution of 3D-printable Objects via Eye Tracking</a></p> | 2016-10-07 11:32:19.217000+00:00 | 2016-10-07 11:32:19.217000+00:00 | null | null | 39,872,707 | <p>I've been messing around with HyperNEAT and ran into a slight issue. From what I understand, the substrate is the initial layout of nodes which are subsequently used to query a CPPN to provide connection weights. I understand that the CPPN activation functions are just the set of activation functions that can appear in each node in the CPPN, but what do the substrate activation functions refer to? I was under the impression that the substrate is not necessarily a network but just a layout used to incorporate the geometry of the problem into the CPPN's pattern producing abilities. So where do substrate activation functions come in?</p>
<p>EDIT: I'm using <a href="https://github.com/lordjesus/UnityNEAT" rel="nofollow">UnityNEAT</a> which is a port of <a href="http://sharpneat.sourceforge.net/" rel="nofollow">SharpNEAT</a> to Unity.</p>
<p>Thanks</p> | 2016-10-05 11:26:18.890000+00:00 | 2016-11-11 16:36:06.440000+00:00 | 2016-10-08 07:56:54.893000+00:00 | machine-learning|neural-network|artificial-intelligence|es-hyperneat | ['http://mooc.ee/MTAT.03.291/2014_spring/uploads/Main/Signal%20Quality%20and%20Data%20Visualizer%20for%20Emotiv%20EPOC.pdf', 'https://daim.idi.ntnu.no/masteroppgaver/007/7564/masteroppgave.pdf', 'https://arxiv.org/pdf/1304.4889.pdf'] | 3 |
47,236,466 | <p>This problem has been solved in issue <a href="https://github.com/tensorflow/tensorflow/issues/14451#event-1336837542" rel="noreferrer">#14451</a>.
Just posting the answer here to make it more visible to other developers.</p>
<p>The sample code oversamples low-frequency classes and undersamples high-frequency ones, where <code>class_target_prob</code> is just the uniform distribution in my case. I wanted to check some conclusions from the recent manuscript <a href="https://arxiv.org/abs/1710.05381" rel="noreferrer">A systematic study of the class imbalance problem in convolutional neural networks</a></p>
<p>The oversampling of specific classes is done by calling:</p>
<pre><code>dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensors(x).repeat(oversample_classes(x))
)
</code></pre>
<p>Here is the full snippet which does all the things:</p>
<pre><code># sampling parameters
oversampling_coef = 0.9 # if equal to 0 then oversample_classes() always returns 1
undersampling_coef = 0.5 # if equal to 0 then undersampling_filter() always returns True
def oversample_classes(example):
"""
Returns the number of copies of given example
"""
class_prob = example['class_prob']
class_target_prob = example['class_target_prob']
prob_ratio = tf.cast(class_target_prob/class_prob, dtype=tf.float32)
    # soften the ratio; if oversampling_coef==0 we recover the original distribution
prob_ratio = prob_ratio ** oversampling_coef
# for classes with probability higher than class_target_prob we
# want to return 1
prob_ratio = tf.maximum(prob_ratio, 1)
# for low probability classes this number will be very large
repeat_count = tf.floor(prob_ratio)
    # prob_ratio can be e.g. 1.9, which means that there is still a 90%
    # chance that we should return 2 instead of 1
repeat_residual = prob_ratio - repeat_count # a number between 0-1
residual_acceptance = tf.less_equal(
tf.random_uniform([], dtype=tf.float32), repeat_residual
)
residual_acceptance = tf.cast(residual_acceptance, tf.int64)
repeat_count = tf.cast(repeat_count, dtype=tf.int64)
return repeat_count + residual_acceptance
def undersampling_filter(example):
"""
Computes if given example is rejected or not.
"""
class_prob = example['class_prob']
class_target_prob = example['class_target_prob']
prob_ratio = tf.cast(class_target_prob/class_prob, dtype=tf.float32)
prob_ratio = prob_ratio ** undersampling_coef
prob_ratio = tf.minimum(prob_ratio, 1.0)
acceptance = tf.less_equal(tf.random_uniform([], dtype=tf.float32), prob_ratio)
return acceptance
dataset = dataset.flat_map(
lambda x: tf.data.Dataset.from_tensors(x).repeat(oversample_classes(x))
)
dataset = dataset.filter(undersampling_filter)
dataset = dataset.repeat(-1)
dataset = dataset.shuffle(2048)
dataset = dataset.batch(32)
sess.run(tf.global_variables_initializer())
iterator = dataset.make_one_shot_iterator()
next_element = iterator.get_next()
</code></pre>
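<p>The snippet assumes each <code>example</code> dict already carries <code>class_prob</code> and <code>class_target_prob</code>. A minimal sketch of how these could be attached with a <code>map</code> (the 3-class frequencies and the <code>'label'</code> feature name are assumptions of this illustration, not part of the answer):</p>
<pre><code>class_probs = tf.constant([0.90, 0.07, 0.03])  # empirical class frequencies (assumed known)
target_probs = tf.constant([1/3, 1/3, 1/3])    # uniform target distribution

def add_sampling_info(example):
    # example is assumed to be a dict with an integer 'label' feature
    example['class_prob'] = tf.gather(class_probs, example['label'])
    example['class_target_prob'] = tf.gather(target_probs, example['label'])
    return example

dataset = dataset.map(add_sampling_info)  # run before the flat_map/filter above
</code></pre>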
<h2>Update #1</h2>
<p>Here is a simple <a href="https://github.com/kmkolasinski/deep-learning-notes/tree/master/tf-oversampling" rel="noreferrer">jupyter notebook</a> which implements the above oversampling/undersampling on a toy model. </p> | 2017-11-11 09:40:05.750000+00:00 | 2018-10-23 12:30:32.873000+00:00 | 2018-10-23 12:30:32.873000+00:00 | null | 47,236,465 | <p>I would like to ask whether the current datasets API allows for the implementation of an oversampling algorithm? I am dealing with a highly imbalanced class problem. I was thinking that it would be nice to oversample specific classes during dataset parsing, i.e. online generation. I've seen the implementation of the rejection_resample function; however, this removes samples instead of duplicating them, and it slows down batch generation (when the target distribution is much different from the initial one). The thing I would like to achieve is: take an example, look at its class probability and decide whether to duplicate it or not. Then call <code>dataset.shuffle(...)</code> and <code>dataset.batch(...)</code> and get an iterator. The best (in my opinion) approach would be to oversample low-probability classes and subsample the most probable ones. I would like to do it online since it's more flexible. </p>
67,388,699 | <p>Kolda et al. proposed the <a href="https://arxiv.org/pdf/1302.6636.pdf" rel="nofollow noreferrer">BTER model</a> (2013) that can generate random graphs with prescribed degree and clustering coefficient distribution (and thus prescribed global clustering index). It seems a bit more complicated than my model (see above), but maybe it's faster or generates less biased graphs. (But to be honest, I assume that my model doesn't generate severely biased graphs either, but essentially random graphs.)</p> | 2021-05-04 16:35:46.640000+00:00 | 2021-05-04 17:17:34.360000+00:00 | 2021-05-04 17:17:34.360000+00:00 | null | 27,526,175 | <p>I'm working on simulations of large-scale neuronal networks, for which I need to generate random graphs that represent the network topology.</p>
<p>I'd like to be able to specify the following properties of these graphs:</p>
<ul>
<li>Number of nodes, <em>N</em> (~=1000-10000)</li>
<li>Average probability of a connection between any two given nodes, <em>p</em> (~0.01-0.2)</li>
<li>Global clustering coefficient, <em>C</em> (~0.1-0.5)</li>
</ul>
<p>Ideally, the random graphs should be drawn uniformly from the set of all possible graphs that satisfy these user-specified criteria.</p>
<p>At the moment I'm using a very crude random diffusion approach where I start out with an Erdos-Renyi random network with the desired size and global connection probability, then on each step I randomly rewire some fraction of the edges. If the rewiring got me closer to the desired <em>C</em> then I keep the rewired network for the next iteration.</p>
<p>Here's my current Python implementation:</p>
<pre><code>import igraph
import numpy as np
def generate_fixed_gcc(n, p, target_gcc, tol=1E-3):
"""
Creates an Erdos-Renyi random graph of size n with a specified global
connection probability p, which is then iteratively rewired in order to
    achieve a user-specified global clustering coefficient.
"""
# initialize random graph
G_best = igraph.Graph.Erdos_Renyi(n=n, p=p, directed=True, loops=False)
loss_best = 1.
n_edges = G_best.ecount()
# start with a high rewiring rate
rewiring_rate = n_edges
n_iter = 0
while loss_best > tol:
# operate on a copy of the current best graph
G = G_best.copy()
# adjust the number of connections to rewire according to the current
# best loss
n_rewire = min(max(int(rewiring_rate * loss_best), 1), n_edges)
G.rewire(n=n_rewire)
# compute the global clustering coefficient
gcc = G.transitivity_undirected()
loss = abs(gcc - target_gcc)
# did we improve?
if loss < loss_best:
# keep the new graph
G_best = G
loss_best = loss
gcc_best = gcc
# increase the rewiring rate
rewiring_rate *= 1.1
else:
# reduce the rewiring rate
rewiring_rate *= 0.9
n_iter += 1
# get adjacency matrix as a boolean numpy array
M = np.array(G_best.get_adjacency().data, dtype=np.bool)
return M, n_iter, gcc_best
</code></pre>
<p>This works OK for small networks (<em>N</em> < 500), but it quickly becomes intractable as the number of nodes increases. It takes about 20 seconds to generate a 200-node graph, and several days to generate a 1000-node graph.</p>
<p>Can anyone suggest an efficient way to do this?</p> | 2014-12-17 12:58:16.260000+00:00 | 2021-05-04 17:17:34.360000+00:00 | 2014-12-17 15:37:55.547000+00:00 | python|numpy|random|graph-theory|igraph | ['https://arxiv.org/pdf/1302.6636.pdf'] | 1 |
27,565,401 | <p>Having done a bit of reading, it looks as though the best solution might be the generalized version of Gleeson's algorithm presented in <a href="http://arxiv.org/abs/1301.6802" rel="nofollow noreferrer">this paper</a>. However, I still don't really understand how to implement it, so for the time being I've been working on <a href="http://dx.doi.org/10.1186/1471-2105-10-405" rel="nofollow noreferrer">Bansal et al.'s algorithm</a>.</p>
<p>Like my naive approach, this is a Markov chain-based method that uses random edge swaps, but unlike mine it specifically targets 'triplet motifs' within the graph for rewiring:</p>
<p><img src="https://i.stack.imgur.com/8SfLA.png" alt="enter image description here"></p>
<p>Since this will have a greater tendency to introduce triangles, it will therefore have a greater impact on the clustering coefficient. At least in the case of undirected graphs, the rewiring step is also guaranteed to preserve the degree sequence. Again, on every rewiring iteration the new global clustering coefficient is measured, and the new graph is accepted if the GCC got closer to the target value.</p>
<p>Bansal et al. actually <a href="http://sbansal.com/ClustRNet/" rel="nofollow noreferrer">provided a Python implementation</a>, but for various reasons I ended up writing my own version, <a href="https://gist.github.com/alimuldal/c7e2905f455cac3f01ed" rel="nofollow noreferrer">which you can find here</a>.</p>
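<p>For readers who just want the flavor of such a rewiring loop without following the links, here is a simplified sketch (a plain degree-preserving double-edge swap with greedy acceptance, <em>not</em> the triplet-motif selection described above):</p>
<pre><code>import igraph
import numpy as np

def clustering_targeted_swaps(G, target_gcc, n_steps=100000, tol=1E-3):
    """Degree-preserving edge swaps, kept only if they move the GCC toward target_gcc."""
    rng = np.random.default_rng()
    gcc = G.transitivity_undirected()
    for _ in range(n_steps):
        e1, e2 = rng.choice(G.ecount(), size=2, replace=False)
        a, b = G.es[int(e1)].tuple
        c, d = G.es[int(e2)].tuple
        # swapping (a,b),(c,d) -> (a,d),(c,b) preserves every node's degree
        if len({a, b, c, d}) < 4 or G.are_connected(a, d) or G.are_connected(c, b):
            continue
        H = G.copy()
        H.delete_edges([(a, b), (c, d)])
        H.add_edges([(a, d), (c, b)])
        new_gcc = H.transitivity_undirected()
        if abs(new_gcc - target_gcc) < abs(gcc - target_gcc):
            G, gcc = H, new_gcc
        if abs(gcc - target_gcc) < tol:
            break
    return G
</code></pre>
<p>(Copying the whole graph on every candidate swap is wasteful; a real implementation would apply the swap in place and revert it on rejection.)</p>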
<h2>Performance</h2>
<p>The Bansal approach takes just over half the number of iterations and half the total time compared with my naive diffusion method:</p>
<p><img src="https://i.stack.imgur.com/CAKDB.png" alt="enter image description here"></p>
<p>I was hoping for bigger gains, but a 2x speedup is better than nothing.</p>
<h2>Generalizing to directed graphs</h2>
<p>One remaining challenge with the Bansal method is that my graphs are directed, whereas Bansal et al.'s algorithm is only designed to work on undirected graphs. With a directed graph, the rewiring step is no longer guaranteed to preserve the in- and out-degree sequences.</p>
<hr>
<h2>Update</h2>
<p>I've just figured out how to generalize the Bansal method to preserve both the in- and out-degree sequences for directed graphs. The trick is to select motifs where the two outward edges to be swapped have opposite directions (the directions of the edges between {x, y1} and {x, y2} don't matter):</p>
<p><img src="https://i.stack.imgur.com/Tychd.png" alt="enter image description here"></p>
<p>I've also made some more optimizations, and the performance is starting to look a bit more respectable - it takes roughly half the number of iterations and half the total time compared with the diffusion approach. I've updated the graphs above with the new timings.</p> | 2014-12-19 11:42:38.723000+00:00 | 2014-12-19 20:15:07.580000+00:00 | 2014-12-19 20:15:07.580000+00:00 | null | 27,526,175 | <p>I'm working on simulations of large-scale neuronal networks, for which I need to generate random graphs that represent the network topology.</p>
<p>I'd like to be able to specify the following properties of these graphs:</p>
<ul>
<li>Number of nodes, <em>N</em> (~=1000-10000)</li>
<li>Average probability of a connection between any two given nodes, <em>p</em> (~0.01-0.2)</li>
<li>Global clustering coefficient, <em>C</em> (~0.1-0.5)</li>
</ul>
<p>Ideally, the random graphs should be drawn uniformly from the set of all possible graphs that satisfy these user-specified criteria.</p>
<p>At the moment I'm using a very crude random diffusion approach where I start out with an Erdos-Renyi random network with the desired size and global connection probability, then on each step I randomly rewire some fraction of the edges. If the rewiring got me closer to the desired <em>C</em> then I keep the rewired network for the next iteration.</p>
<p>Here's my current Python implementation:</p>
<pre><code>import igraph
import numpy as np
def generate_fixed_gcc(n, p, target_gcc, tol=1E-3):
"""
Creates an Erdos-Renyi random graph of size n with a specified global
connection probability p, which is then iteratively rewired in order to
    achieve a user-specified global clustering coefficient.
"""
# initialize random graph
G_best = igraph.Graph.Erdos_Renyi(n=n, p=p, directed=True, loops=False)
loss_best = 1.
n_edges = G_best.ecount()
# start with a high rewiring rate
rewiring_rate = n_edges
n_iter = 0
while loss_best > tol:
# operate on a copy of the current best graph
G = G_best.copy()
# adjust the number of connections to rewire according to the current
# best loss
n_rewire = min(max(int(rewiring_rate * loss_best), 1), n_edges)
G.rewire(n=n_rewire)
# compute the global clustering coefficient
gcc = G.transitivity_undirected()
loss = abs(gcc - target_gcc)
# did we improve?
if loss < loss_best:
# keep the new graph
G_best = G
loss_best = loss
gcc_best = gcc
# increase the rewiring rate
rewiring_rate *= 1.1
else:
# reduce the rewiring rate
rewiring_rate *= 0.9
n_iter += 1
# get adjacency matrix as a boolean numpy array
M = np.array(G_best.get_adjacency().data, dtype=np.bool)
return M, n_iter, gcc_best
</code></pre>
<p>This works OK for small networks (<em>N</em> < 500), but it quickly becomes intractable as the number of nodes increases. It takes about 20 seconds to generate a 200-node graph, and several days to generate a 1000-node graph.</p>
<p>Can anyone suggest an efficient way to do this?</p> | 2014-12-17 12:58:16.260000+00:00 | 2021-05-04 17:17:34.360000+00:00 | 2014-12-17 15:37:55.547000+00:00 | python|numpy|random|graph-theory|igraph | ['http://arxiv.org/abs/1301.6802', 'http://dx.doi.org/10.1186/1471-2105-10-405', 'http://sbansal.com/ClustRNet/', 'https://gist.github.com/alimuldal/c7e2905f455cac3f01ed'] | 4 |
66,209,577 | <p>You can determine which parts of the image are 'important' for the classification by creating a <strong>class activation map</strong>. Have a look at <a href="https://github.com/ramprs/grad-cam/" rel="nofollow noreferrer">Grad-CAM: Gradient-weighted Class Activation Mapping</a> to see an implementation on GitHub.</p>
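<p>For reference, here is a compact Grad-CAM sketch with <code>tf.keras</code> (the model, the conv layer name and the preprocessing are placeholders you'd adapt to your own network):</p>
<pre><code>import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_idx):
    # model that also exposes the activations of the chosen conv layer
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis])
        loss = preds[:, class_idx]
    grads = tape.gradient(loss, conv_out)         # how the class score reacts to each activation
    weights = tf.reduce_mean(grads, axis=(1, 2))  # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out * weights[:, tf.newaxis, tf.newaxis, :], axis=-1)
    return tf.nn.relu(cam)[0].numpy()             # keep only positive influence
</code></pre>
<p>Upsampled to the input resolution, the resulting map highlights the regions the network relied on, which is one concrete way to see which parts of an image drive (or hinder) the classification.</p>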
<p>Review this paper, <a href="https://arxiv.org/pdf/1311.2901.pdf" rel="nofollow noreferrer">Visualizing and Understanding Convolutional Networks</a>, which strives to understand why a particular large CNN might perform well and how to improve it.</p> | 2021-02-15 14:10:41.540000+00:00 | 2021-02-15 14:10:41.540000+00:00 | null | null | 66,206,982 | <p>Is there a way to measure how hard an image is to classify (instance hardness), and also a way to measure which parts of the image are hard?</p>
<p>I am currently exploring CNNs and the question of why some images are more difficult to classify than others. In general, it can be said that class overlap is the decisive factor. But now I was wondering whether this can also be quantified concretely for different parts/segments/patches of an image, so that one can determine which parts of the image are difficult for a classifier.</p>
31,214,306 | <p>Here is an implementation of common mathematical functions for BigDecimal <a href="http://arxiv.org/abs/0908.3030v2" rel="nofollow">http://arxiv.org/abs/0908.3030v2</a>.</p>
<p>It contains an implementation of pow that supports what you need.</p> | 2015-07-03 21:18:59.220000+00:00 | 2015-07-03 21:18:59.220000+00:00 | null | null | 31,214,152 | <p>How can you maintain Excel-level precision while performing computations in Java? In most cases using BigDecimal would solve the issue, but what about when using BigDecimals in complex calculations?</p>
<p>For example, the formula for present value being:</p>
<p>PV = P / (1 + r)^n</p>
<p>Now, if each of these components is a BigDecimal value, BigDecimal does not provide a mechanism to raise a BigDecimal to a fractional power.</p>
<p>Thanks in advance.</p> | 2015-07-03 21:00:37.833000+00:00 | 2015-07-03 21:18:59.220000+00:00 | null | java|excel|floating-point-precision | ['http://arxiv.org/abs/0908.3030v2'] | 1 |
59,653,918 | <p>It seems you have a collection of dialogues, and want to classify each turn in the dialogue into some number of classes.</p>
<p>A similar, well-studied problem is Dialogue Act Classification. Dialogue act classification is the task of classifying an utterance with respect to the function it serves in a dialogue, i.e. the act the speaker is performing. Dialogue acts are a type of speech act (for Speech Act Theory, see <a href="http://www.hup.harvard.edu/catalog.php?isbn=9780674411524" rel="nofollow noreferrer">Austin (1975)</a> and <a href="https://www.cambridge.org/core/books/speech-acts/D2D7B03E472C8A390ED60B86E08640E7" rel="nofollow noreferrer">Searle (1969)</a>).</p>
<p>The paper <a href="https://arxiv.org/abs/1709.04250" rel="nofollow noreferrer">"Dialogue Act Sequence Labeling using Hierarchical encoder with CRF"</a> has code available: <a href="https://github.com/YanWenqiang/HBLSTM-CRF" rel="nofollow noreferrer">GitHub</a>. It is academic code, and not the clearest. It is unclear what version of TF they use.</p>
<p>RE: batch size - they use <code>batchSize = 2</code> (<a href="https://github.com/YanWenqiang/HBLSTM-CRF/blob/master/HBLSTM-CRF.py#L92" rel="nofollow noreferrer">line</a>). The dialogues have variable-length utterances.</p>
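<p>If you want to experiment with the hierarchical idea yourself, a minimal sketch with <code>tf.keras</code> could look like the following (all sizes are hypothetical and this is <em>not</em> the paper's implementation; utterances are assumed padded to fixed lengths):</p>
<pre><code>import tensorflow as tf

vocab_size, max_utts, max_words, n_classes = 5000, 200, 30, 10  # hypothetical sizes

inp = tf.keras.Input(shape=(max_utts, max_words))               # one dialogue per sample
emb = tf.keras.layers.Embedding(vocab_size, 64)(inp)
# first encoder: word level, run independently over every utterance
utt_enc = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))(emb)
# second encoder: utterance level, over the whole conversation
conv_enc = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(64, return_sequences=True))(utt_enc)
out = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(n_classes, activation='softmax'))(conv_enc)

model = tf.keras.Model(inp, out)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
</code></pre>
<p>Because every sample is one whole dialogue, the recurrent state never leaks across conversations, which matches the "remember within, forget between" intuition from the question.</p>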
<p>I think you should read the paper though, there are lots of relevant quotes, like</p>
<blockquote>
<p>We propose a hierarchical recurrent encoder, where the first encoder operates at the utterance level, encoding each word in each utterance, and the
second encoder operates at the conversation level, encoding
each utterance in the conversation, based on the representations of the previous encoder. These two encoders make sure
that the output of the second encoder capture the dependencies among utterances.</p>
</blockquote> | 2020-01-08 21:15:38.713000+00:00 | 2020-01-08 21:15:38.713000+00:00 | null | null | 59,620,260 | <p>I am building a text classification model in tensorflow (experimenting with different architectures from BiLSTM to 1DConvnet, etc.) My data is structured as follows:</p>
<p>1 corpus of documents</p>
<p>~ 100 documents made of independent but contextually similar multi-party conversation transcriptions (time series). </p>
<p>~ 200 utterances per document that are labeled (same labeling convention for all documents)</p>
<p>In other words, it looks like this (label structure looks the same, but with one int per string):</p>
<pre><code>data = [
[
'hello how are you'
'i am good'
'whats the weather today'
...,
],
[
'how long have you had that cough'
'roughly 2 weeks'
'anything else'
...,
],
...,
]
</code></pre>
<p>Right now, I am feeding my data into my models as a flat list of strings (data) and ints (labels) by flattening all documents. This works, but I wonder if this is the best way to handle my data. IIUC, using any kind of RNN means that my model is 'remembering' the previous data. However, as each document contains separate conversations, text from document 1 does not affect text from document 2, and so on. Intuitively, as each document is an independent conversation, I want the model to 'remember' what happened in the beginning of a conversation at the end of a conversation, but to 'forget' when moving to the next. Is this intuition correct?</p>
<p>What is the best practice in this scenario? Is there a way to feed in 1 document at a time (i.e. setting batch size to document length?)? Would this make a difference, or is a flat list the way to go?</p>
<p>Thanks.</p> | 2020-01-06 23:21:59.723000+00:00 | 2020-01-08 21:15:38.713000+00:00 | null | python|tensorflow|keras|deep-learning|nlp | ['http://www.hup.harvard.edu/catalog.php?isbn=9780674411524', 'https://www.cambridge.org/core/books/speech-acts/D2D7B03E472C8A390ED60B86E08640E7', 'https://arxiv.org/abs/1709.04250', 'https://github.com/YanWenqiang/HBLSTM-CRF', 'https://github.com/YanWenqiang/HBLSTM-CRF/blob/master/HBLSTM-CRF.py#L92'] | 5 |
66,652,733 | <p>Transformers were originally proposed, as the title of "Attention is All You Need" implies, as a more efficient seq2seq model ablating the RNN structure commonly used until that point.</p>
<p>However, in pursuing this efficiency, single-headed attention had reduced descriptive power compared to RNN-based models. Multiple heads were proposed to mitigate this, allowing the model to learn multiple lower-scale feature maps as opposed to one all-encompassing map:</p>
<blockquote>
<p>In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions [...] This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention...</p>
<ul>
<li><a href="https://arxiv.org/abs/1706.03762" rel="nofollow noreferrer">Attention is All You Need</a> (2017)</li>
</ul>
</blockquote>
<p>As such, <a href="https://ai.stackexchange.com/a/26840/23503"><strong>multiple attention heads</strong></a> in a single layer in a transformer <strong>are analogous to multiple kernels in a single layer in a CNN</strong>: they have the same architecture, and operate on the same feature-space, but since they are separate 'copies' with different sets of weights, they are 'free' to learn different functions.</p>
<p>In a CNN this may correspond to different definitions of visual features, and in a Transformer this may correspond to different definitions of relevance:<sup>1</sup></p>
<p>For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Architecture</th>
<th>Input</th>
<th>(Layer 1)<br>Kernel/Head 1</th>
<th>(Layer 1)<br>Kernel/Head 2</th>
</tr>
</thead>
<tbody>
<tr>
<td>CNN</td>
<td>Image</td>
<td>Diagonal <a href="https://en.wikipedia.org/wiki/Kernel_(image_processing)#Details" rel="nofollow noreferrer">edge-detection</a></td>
<td>Horizontal edge-detection</td>
</tr>
<tr>
<td>Transformer</td>
<td>Sentence</td>
<td>Attends to next word</td>
<td>Attends from verbs to their direct objects</td>
</tr>
</tbody>
</table>
</div><hr />
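<p>To make the mechanics concrete, here is a minimal numpy sketch (shapes and weight matrices are hypothetical) of how every head computes its own attention pattern over the same tokens before the results are concatenated:</p>
<pre><code>import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    # X: (seq_len, d_model); Wq/Wk/Wv/Wo: (d_model, d_model)
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    # project, then split the feature dimension into independent heads
    Q = (X @ Wq).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    K = (X @ Wk).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    V = (X @ Wv).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)  # one (seq, seq) map per head
    heads = softmax(scores) @ V                          # each head attends in its own way
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo                                   # mix the heads back together
</code></pre>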
<p><strong>Notes:</strong></p>
<sup>
<ol>
<li>There is no guarantee that these are human interpretable, but in many popular architectures they do map accurately onto linguistic concepts:
<blockquote>
<p>While no single head performs well at many relations, we find that particular heads correspond remarkably well to particular relations. For example, we find heads that find direct objects of verbs, determiners of nouns, objects of prepositions, and objects of possesive pronouns...</p>
<ul>
<li><a href="https://www.aclweb.org/anthology/W19-4828/" rel="nofollow noreferrer">What Does BERT Look at? An Analysis of BERT’s Attention</a> (2019)</li>
</ul>
</blockquote>
</li>
</ol>
</sup> | 2021-03-16 09:49:50.173000+00:00 | 2021-03-18 09:24:48.353000+00:00 | 2021-03-18 09:24:48.353000+00:00 | null | 66,244,123 | <p>I am trying to understand why transformers use multiple attention heads. I found the following <a href="https://towardsdatascience.com/simple-explanation-of-transformers-in-nlp-da1adfc5d64f" rel="nofollow noreferrer">quote</a>:</p>
<blockquote>
<p>Instead of using a single attention function where the attention can
be dominated by the actual word itself, transformers use multiple
attention heads.</p>
</blockquote>
<p>What is meant by "the attention being dominated by the word itself" and how does the use of multiple heads address that?</p> | 2021-02-17 14:38:34.797000+00:00 | 2021-03-18 09:24:48.353000+00:00 | 2021-03-17 12:20:46.617000+00:00 | nlp|transformer-model|attention-model | ['https://arxiv.org/abs/1706.03762', 'https://ai.stackexchange.com/a/26840/23503', 'https://en.wikipedia.org/wiki/Kernel_(image_processing)#Details', 'https://www.aclweb.org/anthology/W19-4828/'] | 4 |
49,672,770 | <p>You are probably talking about <a href="https://arxiv.org/pdf/1508.02096.pdf" rel="nofollow noreferrer" title="this very cool paper">this paper</a>. They don't seem to have released their code.</p>
<p>I don't think you would have to "undo the word embedding", because you can do the training simultaneously on two objectives:</p>
<ul>
<li>For each word, one loss for the characters produced, with respect to the true characters of the word. CE loss looks good for this.</li>
<li>One loss for the "word embeddings" produced in the decoder (by the inner LSTM), with respect to the word embeddings produced in the encoder (by the outer LSTM applied on characters). I'm thinking MSE loss for example.</li>
</ul>
<p>(Does it make sense?)</p>
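<p>A minimal sketch of that combined objective in PyTorch (tensor shapes and the weighting factor are assumptions of this illustration):</p>
<pre><code>import torch
import torch.nn as nn

ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

def combined_loss(char_logits, target_chars, dec_word_emb, enc_word_emb, alpha=1.0):
    # char_logits: (batch, seq, vocab) from the character-level decoder
    # target_chars: (batch, seq) true character ids
    # dec_word_emb / enc_word_emb: (batch, emb_dim) inner vs. outer word embeddings
    char_loss = ce(char_logits.flatten(0, 1), target_chars.flatten())
    emb_loss = mse(dec_word_emb, enc_word_emb)
    return char_loss + alpha * emb_loss  # train both objectives jointly
</code></pre>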
<p>Out of curiosity, did you manage to implement this idea? I'm going to try it in PyTorch.</p> | 2018-04-05 12:45:57.267000+00:00 | 2018-04-05 12:45:57.267000+00:00 | null | null | 37,900,366 | <p>I would like to build an LSTM with a special word embedding, but I have some questions about how this would work. </p>
<p>As you might know, some LSTMs operate on characters, so it is characters in, characters out. I would like to do the same, with an abstraction on words to learn a robust embedding on them with a nested LSTM to be resistant to slight character-level errors. </p>
<p>So, a tiny LSTM would unroll on every letter of a word, then this would create an embedding of the word. Each embedded word in a sentence would then be fed as an input to a higher level LSTM, which would operate on a word level at every time step, rather than on characters. </p>
<p>Questions:</p>
<ul>
<li>I can no longer find the research paper that talked about this. If you know what I am talking about, I would like to put a name on what I want to do.</li>
<li>Does some TensorFlow open-source code already exist for that?</li>
<li>Otherwise, do you have an idea on how to implement it? The output of the neural network might be harder to deal with, as we would need to undo the word embedding for the training on characters with an output nested LSTM. The whole thing should be trained once as a single unit (workflow: LSTM chars in, LSTM on words, LSTM chars out).</li>
</ul>
<p>I guess that <code>rnn_cell.MultiRNNCell</code> would stack LSTMs on top of each other rather than nesting them. </p>
<p>Otherwise, would you recommend training the embeddings (in and out) as an autoencoder outside the main LSTM?</p>
33,101,342 | <p><a href="http://www.cs.berkeley.edu/~rbg/" rel="nofollow">Dr Ross Girshik</a> has done a lot of work on object detection. You can learn a lot from his detailed git on <a href="https://github.com/rbgirshick/fast-rcnn" rel="nofollow">fast RCNN</a>: you should be able to find a caffe branch there, with a demo. I did not use it myself, but it seems very comprehensible.</p>
<p>Another direction you might find interesting is <a href="http://lsda.berkeleyvision.org/" rel="nofollow">LSDA</a>: using weak supervision to train object detection for many classes.</p>
<p>BTW, have you looked into <a href="http://arxiv.org/abs/1506.01497" rel="nofollow">faster-rcnn</a>?</p> | 2015-10-13 11:21:26.903000+00:00 | 2015-10-13 12:06:02.930000+00:00 | 2015-10-13 12:06:02.930000+00:00 | null | 33,101,145 | <p>After several month working with <a href="/questions/tagged/caffe" class="post-tag" title="show questions tagged 'caffe'" rel="tag">caffe</a>, I've been able to train my own models successfully. For example further than my own models, I've been able to train ImageNet with 1000 classes.</p>
<p>In my project now, I'm trying to extract the region of my interest class. After that I've compiled and run the demo of <a href="https://github.com/rbgirshick/fast-rcnn" rel="nofollow"><strong>Fast R-CNN</strong></a> and it works ok, but the sample models contains only 20 classes and I'd like to have more classes, for example all of them.</p>
<p>I've already downloaded the <a href="http://image-net.org/download-bboxes" rel="nofollow"><strong>bounding boxes</strong></a> of ImageNet, with the real images.</p>
<p>Now, I've gone blank, I can't figure out the next steps and there's not a documentation of how to do it. The only thing I've found is how to train the INRIA person model, and they provide dataset + annotations + python script.</p>
<p>My questions are:</p>
<ul>
<li>Is there maybe any tutorial or guide that I've missed?</li>
<li>Is there already a model trained with 1000 classes able to classify images and extract the bounding boxes?</li>
</ul>
<p>Thank you very much in advance.</p>
<p>Regards.</p>
<p>Rafael.</p> | 2015-10-13 11:11:21.037000+00:00 | 2017-06-27 13:00:42.340000+00:00 | 2017-06-27 13:00:42.340000+00:00 | neural-network|computer-vision|deep-learning|caffe|conv-neural-network | ['http://www.cs.berkeley.edu/~rbg/', 'https://github.com/rbgirshick/fast-rcnn', 'http://lsda.berkeleyvision.org/', 'http://arxiv.org/abs/1506.01497'] | 4 |
40,620,050 | <p>Matrix factorization assumes that "latent factors", such as a user's preference for Italian food and how Italian a given dish is, are implied by the ratings in the matrix.</p>
<p>So the whole problem transforms into a matrix reconstruction problem, for which a lot of different solutions exist. A simple, maybe slow, solution (besides ALS and some other matrix reconstruction approaches) is approximating the matrix using a gradient descent algorithm. I recommend this short <a href="http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/ieeecomputer.pdf" rel="nofollow noreferrer">IEEE article about recommender systems</a>.</p>
<p>Extracting the latent factors is a different problem.</p>
<p>So an implementation of GDM could look like:</p>
<pre><code>public void learnGDM(){
//traverse learnSet
for(int repeat = 0; repeat < this.steps; repeat++){
for (int i = 0; i < this.learnSet.length; i++){
for (int j = 0; j < this.learnSet[0].length; j++){
if(this.learnSet[i][j] > 0.0d){
double Rij = this.learnSet[i][j];
for(int f = 0 ; f <= latentFactors; f++){
double error = Rij - dotProduct(Q.getRow(i), P.getRow(j));/*estimated_Rij;*/
//ieee computer 1.pdf
double qif = Q.get(i, f);
double pif = P.get(j, f);
double Qvalue = qif + gradientGamma * (error * pif - gradientLambda * qif);
double Pvalue = pif + gradientGamma * (error * qif - gradientLambda * pif);
Q.set(i,f, Qvalue);
P.set(j, f, Pvalue);
}
}
}
}
//check global error
if(checkGlobalError() < 0.001d){
            System.out.println("took " + repeat + " steps");
break;
}
}
}
</code></pre>
<p>Here the <code>learnSet</code> is a two-dimensional array containing the rating matrix as in the IEEE article. The GDM algorithm changes the factor vectors P and Q a bit every iteration so that they approximate the ratings in the rating matrix. The "not given" ratings can then be calculated as the dot product of P and Q, and the highest estimates for the not-given ratings become the recommendations.</p>
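<p>The same update translates to a compact Python/numpy sketch (here 0 marks a "not rated" cell, which is an assumption of this illustration):</p>
<pre><code>import numpy as np

def factorize(R, k=2, steps=5000, gamma=0.002, lam=0.02, tol=1E-3):
    n_users, n_items = R.shape
    Q = np.random.rand(n_users, k)   # user factors
    P = np.random.rand(n_items, k)   # item factors
    rated = R.nonzero()
    for _ in range(steps):
        for i, j in zip(*rated):
            err = R[i, j] - Q[i] @ P[j]
            qi = Q[i].copy()         # keep the old value for a simultaneous update
            Q[i] += gamma * (err * P[j] - lam * Q[i])
            P[j] += gamma * (err * qi - lam * P[j])
        if np.sum((R[rated] - (Q @ P.T)[rated]) ** 2) < tol:
            break
    return Q @ P.T                   # predictions, including the missing cells
</code></pre>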
<p>So that's it for a start. There are a lot of optimizations, as well as other algorithms and modified versions of GDM that can also be run in parallel.</p>
<p>Some good reads:</p>
<p><a href="http://www.prem-melville.com/publications/recommender-systems-eml2010.pdf" rel="nofollow noreferrer">recommender systems in the Encyclopedia of Machine Learning</a></p>
<p><a href="https://www.csie.ntu.edu.tw/~htlin/mooc/doc/215_present.pdf" rel="nofollow noreferrer">presentation on matrix factorization</a></p>
<p><a href="https://arxiv.org/pdf/1202.1112" rel="nofollow noreferrer">recommender systems</a> <--- big one ^^</p> | 2016-11-15 21:33:21.977000+00:00 | 2016-11-15 21:38:22.223000+00:00 | 2016-11-15 21:38:22.223000+00:00 | null | 40,398,657 | <p>I'm working on a recommender system for restaurants using an item-based collaborative filter in C# 6.0. I want to set up my algorithm to perform as well as possible, so I've done some research on different ways to predict ratings for restaurants the user hasn't reviewed yet.</p>
<p><strong>I'll start with the research I have done</strong></p>
<p>First I wanted to set up a user-based collaborative filter using a Pearson correlation between users to be able to see which users fit well together.<br>
The main problem with this was the amount of data required to be able to calculate this correlation. First you needed 4 reviews per 2 users on the same restaurant. But my data is going to be very sparse. It wasn't likely that 2 users would have reviewed the exact same 4 restaurants. I wanted to fix this by widening the match terms (i.e. not matching users on the same restaurants, but on the same type of restaurant), but this gave me the problem where it was hard to determine which reviews I would use in the correlation, since a user could have left 3 reviews on a restaurant with the type 'Fast food'. Which of these would fit best with the other user's review on a fast food restaurant?</p>
<p>After more research I concluded that an item-based collaborative filter outperforms a user-based collaborative filter. But again, I encountered the data sparsity issue. In my tests I was successfully able to calculate a prediction for a rating on a restaurant the user hasn't reviewed yet, but when I used the algorithm on a sparse dataset, the results weren't good enough. (Most of the time, a similarity wasn't possible between two restaurants, since no 2 users have rated the same restaurant).<br>
After even more research I found that using a matrix factorization method works well on sparse data.</p>
<p><strong>Now my problem</strong></p>
<p>I have been looking all over the internet for tutorials on using this method, but I don't have any experience in recommender systems and my knowledge of algebra is also limited. I understand the gist of the method. You have a matrix with the users on one side and the restaurants on the other. Each cell is the rating the user has given on the restaurant.<br>
The matrix factorization method creates two matrices from this: one with the weights between users and restaurant types, and the other between restaurants and these types. </p>
<p>The thing I just can't figure out is how to calculate the weight between the type of restaurant and the restaurants/users (if I understand matrix factorization correctly). I found dozens of formulas which calculate these numbers, but I can't figure out how to break them down and apply them in my application.</p>
<p>I'll give you an example of how the data looks in my application:<br>
In this table U1 stands for a user and R1 stands for a restaurant.
Each restaurant has its own tags (type of restaurant), e.g. R1 has the tag 'Italian', R2 has 'Fast food', etc.</p>
<pre><code> | R1 | R2 | R3 | R4 |
U1 | 3 | 1 | 2 | - |
U2 | - | 3 | 2 | 2 |
U3 | 5 | 4 | - | 4 |
U4 | - | - | 5 | - |
</code></pre>
<p>Is there anyone who can point me in the right direction or explain how I should use this method on my data? Any help would be greatly appreciated!</p> | 2016-11-03 10:03:02.290000+00:00 | 2020-01-15 23:37:23.083000+00:00 | 2016-11-03 12:30:04.737000+00:00 | c#|algorithm|collaborative-filtering|matrix-factorization | ['http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/ieeecomputer.pdf', 'http://www.prem-melville.com/publications/recommender-systems-eml2010.pdf', 'https://www.csie.ntu.edu.tw/~htlin/mooc/doc/215_present.pdf', 'https://arxiv.org/pdf/1202.1112'] | 4 |
30,157,483 | <p>What you are asking is how to decide whether a given system of coins is <strong>canonical</strong> for the change-making problem. A system is canonical if the greedy algorithm always gives an optimal solution. You can decide whether a system of coins which includes a 1-cent piece is canonical or not in a finite number of steps. Details, and more efficient algorithms in certain cases, can be found in <a href="http://arxiv.org/pdf/0809.0400.pdf" rel="nofollow">http://arxiv.org/pdf/0809.0400.pdf</a>.</p> | 2015-05-10 22:56:31.350000+00:00 | 2015-05-10 22:56:31.350000+00:00 | null | null | 30,138,887 | <p>The Problem is making n cents change with quarters, dimes, nickels, and pennies, and using the least total number of coins. In the particular case where the four denominations are quarters,dimes, nickels, and pennies, we have c1 = 25, c2 = 10, c3 = 5, and c4 = 1. </p>
<p>If we have <strong>only quarters, dimes, and pennies (and no nickels)</strong> to use,
the greedy algorithm would make change for <strong>30 cents using six coins</strong>—a quarter and five pennies—whereas we could have used <strong>three coins</strong>, namely, three dimes.</p>
<p>Given a set of denominations, how can we say whether greedy approach creates an optimal solution?</p> | 2015-05-09 10:35:33.377000+00:00 | 2015-05-10 22:56:31.350000+00:00 | 2015-05-09 11:26:11.783000+00:00 | algorithm|dynamic-programming|greedy | ['http://arxiv.org/pdf/0809.0400.pdf'] | 1 |
58,256,783 | <p>The order of the layers affects the convergence of your model and hence your results. Based on the Batch Normalization <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="nofollow noreferrer">paper</a>, the author suggests that Batch Normalization should be implemented before the activation function, since Dropout is applied after computing the activations. The right order of layers is then:</p>
<ul>
<li>Dense or Conv</li>
<li>Batch Normalization</li>
<li>Activation</li>
<li>Dropout</li>
</ul>
<p>In code using keras, here is how you write it sequentially:</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, BatchNormalization, Activation, Dropout

model = Sequential()
model.add(Dense(n_neurons, input_shape=your_input_shape, use_bias=False)) # it is important to disable bias when using Batch Normalization
model.add(BatchNormalization())
model.add(Activation('relu')) # for example
model.add(Dropout(rate=0.25))
</code></pre>
<p>Batch Normalization helps to avoid Vanishing/Exploding Gradients when training your model. Therefore, it is especially important if you have many layers. You can read the provided paper for more details.</p> | 2019-10-06 11:27:51.540000+00:00 | 2019-10-06 11:27:51.540000+00:00 | null | null | 58,256,610 | <p>I was building a neural network model and my question is whether the ordering of the dropout and batch normalization layers actually affects the model.
Will putting the dropout layer before the batch-normalization layer (or vice versa) actually make any difference to the output of the model if I am using the ROC-AUC score as my metric of measurement?</p>
<p>I expect the output to have a large ROC-AUC score and want to know whether it will be affected in any way by the ordering of the layers.</p> | 2019-10-06 11:03:25.187000+00:00 | 2019-10-06 16:44:32.417000+00:00 | 2019-10-06 16:44:32.417000+00:00 | python-3.x|machine-learning|deep-learning|batch-normalization|dropout | ['https://arxiv.org/pdf/1502.03167.pdf'] | 1
44,042,525 | <p>Going Deeper With Convolutions (or everything you need to know about GoogLeNet, if you know how a CNN works and is built): <a href="http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf" rel="nofollow noreferrer">http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf</a></p>
<p>Rethinking the Inception Architecture for Computer Vision (improvements): <a href="https://arxiv.org/pdf/1512.00567.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1512.00567.pdf</a></p>
<p>Hope it helps.</p> | 2017-05-18 08:31:08.093000+00:00 | 2017-05-18 08:31:08.093000+00:00 | null | null | 44,034,098 | <p>I have a memoir to write whose main topic is comparing Google's deep learning algorithms for image recognition with the one that my teacher has made, plus creating my own (based on the one that my teacher made) with an incremental neural network, and making a benchmark.</p>
<p>Can anyone give me some resources where I can learn more about the implementation of Google's deep learning neural network for image recognition?</p> | 2017-05-17 20:19:51.460000+00:00 | 2017-05-27 12:57:32.940000+00:00 | 2017-05-27 12:57:32.940000+00:00 | neural-network|deep-learning|image-recognition | ['http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf', 'https://arxiv.org/pdf/1512.00567.pdf'] | 2
40,566,228 | <p>Because you have variable-sized patches, there are a couple of ways you can handle the classification of these patches using an SVM. These have their advantages and disadvantages, so you will have to decide what you think is best. Given that you have decided to choose a patch of size <code>M x N</code> for the images to be submitted to your SVM for classification, you can try one of the two following approaches:</p>
<h1>Resize the input image patches</h1>
<p>For each of your images at test time, resize them so that they all match the size of <code>M x N</code>, then run them through the SVM classification pipeline to determine which class each image belongs to. The advantage of this is that the only information you lose is due to subsampling. However, the disadvantage is that if the image is smaller than the target patch size of <code>M x N</code> you will introduce bogus information when upsampling to match the target patch size. This kind of thing has been seen before, especially in deep learning. Specifically, <a href="https://arxiv.org/pdf/1506.01497v3.pdf" rel="nofollow"><strong>Region Proposal Networks by Ren et al.</strong></a> first take a look at which patches in a larger image are candidates to contain an object or something worth looking at; they then <strong>resize</strong> the patches to match the input layer of their (convolutional) neural network and proceed with the classification.</p>
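<p>As a sketch (in Python with OpenCV, purely for illustration since the question is MATLAB-tagged), the resizing approach boils down to:</p>
<pre><code>import cv2
import numpy as np

M, N = 64, 64  # hypothetical target patch size (rows x columns)

def prepare_patch(patch):
    # cv2.resize takes (width, height), i.e. (N, M)
    resized = cv2.resize(patch, (N, M), interpolation=cv2.INTER_AREA)
    return resized.astype(np.float32).ravel()  # fixed-length vector for the SVM
</code></pre>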
<h1>Search for patches over multiple scales</h1>
<p>Another way is to keep the image size intact but, using patch sizes of <code>M x N</code>, do a sliding window scheme where you extract overlapping patches of size <code>M x N</code>, submit these to your SVM, then for the centre of each overlapping patch determine what the class of that patch would be. You would do this over multiple scales, then have a voting procedure where the most occurring class over the entire image is the class of interest. Something similar to this was seen in <a href="https://arxiv.org/pdf/1312.6229v4.pdf" rel="nofollow"><strong>Sermanet et al. for their OverFeat classification engine</strong></a> - also using convolutional neural networks. The advantage of this is that you don't lose any information, in that you are using all (if not most) of the image information when classifying an object. The disadvantage is the amount of computation time required - specifically, the number of scales, the amount of overlap between windows and the patch size itself are all hyperparameters that you need to determine for the most optimal performance. This approach also assumes that the patch size is smaller than the image in question when scanning. You will have to be cognizant and choose patch sizes that are smaller than the largest image you have in your training dataset.</p>
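<p>A corresponding sketch of the multi-scale sliding window (again Python/OpenCV for illustration; the stride and scales are hyperparameters you would tune):</p>
<pre><code>import cv2

def sliding_patches(image, M, N, stride=16, scales=(1.0, 0.75, 0.5)):
    """Yield M x N patches over several scales, along with their positions."""
    H0, W0 = image.shape[:2]
    for s in scales:
        scaled = cv2.resize(image, (int(W0 * s), int(H0 * s)))
        H, W = scaled.shape[:2]
        for y in range(0, H - M + 1, stride):
            for x in range(0, W - N + 1, stride):
                yield scaled[y:y + M, x:x + N], (y, x, s)

# each yielded patch goes through the SVM; votes are then pooled over the image
</code></pre>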
<h1>If I can recommend....</h1>
<p>Because you are doing image classification, the algorithms that have the best performance in classification, and the sheer speed at test time, would be convolutional neural networks. I would consider looking at those rather than using SVMs for performance. As a start, take a look at the <a href="https://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow"><strong>AlexNet pipeline by Krizhevsky et al.</strong></a>. This was <em>the</em> seminal work and how convolutional neural networks were placed on the map for computer vision tasks, such as classification, detection and so on.</p> | 2016-11-12 18:28:11.873000+00:00 | 2016-11-12 18:48:25.850000+00:00 | 2016-11-12 18:48:25.850000+00:00 | null | 40,564,956 | <p>I want to train a classifier to classify between "person with weapon" and "person without weapon". A weapon can be any weapon, like a revolver or an assault rifle. <br>
I have images with bounding boxes around the weapons. The images are of different sizes. <br>
<strong>What do I want to do?</strong><br>
I want to train an SVM classifier using the raw image patches obtained from the bounding box coordinates of weapons. For "person without weapon" I want to pass the whole raw image as a feature vector to the SVM. <br>
<strong>Limitations:</strong><br>
Each bounding box is of a different size, which means a weapon of a different size. I can't use PCA for these bounding boxes because I think it may result in loss of information, since there are 3 different types of weapons with different sizes in the images. Some bounding boxes cover almost the whole image. So first I have to downscale the image and bounding box, because otherwise my memory runs out if I take the whole image for PCA. <br>
<strong>Question:</strong><br>
How can I train an SVM using variable-sized feature vectors? To put it another way, how can I make all feature vectors the same size without losing information?</p>
35,368,570 | <p>A sentence is composed of words, so you can indeed predict the next sentence by predicting words sequentially. There are models, such as the one described in <a href="http://arxiv.org/abs/1507.07998" rel="nofollow">this</a> paper, that build embeddings for entire paragraphs, which can be useful for your purpose. Of course there is <a href="http://arxiv.org/abs/1506.05869" rel="nofollow">Neural Conversational Model</a> work that probably directly fits your need. TensorFlow doesn't ship with working examples of these models, but the recurrent models that come with TensorFlow should give you a good starting point for implementing them.</p> | 2016-02-12 17:18:48.673000+00:00 | 2016-02-12 17:18:48.673000+00:00 | null | null | 35,366,139 | <p>I'd like to build a conversational model that can predict a sentence from the previous sentences using TensorFlow LSTMs. The example provided in the TensorFlow tutorial can be used to predict the next word in a sentence.</p>
<p><a href="https://www.tensorflow.org/versions/v0.6.0/tutorials/recurrent/index.html" rel="nofollow noreferrer">https://www.tensorflow.org/versions/v0.6.0/tutorials/recurrent/index.html</a></p>
<pre class="lang-py prettyprint-override"><code>lstm = rnn_cell.BasicLSTMCell(lstm_size)
# Initial state of the LSTM memory.
state = tf.zeros([batch_size, lstm.state_size])
loss = 0.0
for current_batch_of_words in words_in_dataset:
# The value of state is updated after processing each batch of words.
output, state = lstm(current_batch_of_words, state)
# The LSTM output can be used to make next word predictions
logits = tf.matmul(output, softmax_w) + softmax_b
probabilities = tf.nn.softmax(logits)
loss += loss_function(probabilities, target_words)
</code></pre>
<p>Can I use the same technique to predict the next sentence? Is there any working example on how to do this?</p> | 2016-02-12 15:21:58.267000+00:00 | 2018-05-10 22:09:49.337000+00:00 | 2018-05-10 22:09:49.337000+00:00 | tensorflow | ['http://arxiv.org/abs/1507.07998', 'http://arxiv.org/abs/1506.05869'] | 2
43,878,139 | <p>The class of CNFs such that every clause is either positive or negative, that is, there are no clauses containing positive and negative literals at the same time, has been called "monotone CNFs" for example by Hans Kleine Buening in his book on propositional logic. A recent paper on monotone CNFs is
<a href="https://arxiv.org/abs/1603.07881" rel="nofollow noreferrer">https://arxiv.org/abs/1603.07881</a>
"Monotone 3-Sat-4 is NP-complete"
It is not hard to see, by standard techniques, that monotone 3-SAT (all clauses are either positive or negative, and all clauses have length at most 3) is NP-complete, and the above paper refines this by showing NP-completeness for the case where every variable occurs at most four times.</p> | 2017-05-09 19:15:57.530000+00:00 | 2017-05-09 19:15:57.530000+00:00 | null | null | 13,710,435 | <p>Suppose you have an instance of the boolean satisfiability problem where the formula is given in CNF. Furthermore, each clause contains only positive literals or negative literals. For example:</p>
<pre><code>(a || b) && (!a || !c || !d) && (b || d)
</code></pre>
<p>Does such a boolean formula have a special name? Is there a faster way to test satisfiability with this type of formula, compared to standard CNF formulas?</p> | 2012-12-04 19:29:18.087000+00:00 | 2017-05-09 19:15:57.530000+00:00 | 2012-12-05 02:34:23.520000+00:00 | algorithm|complexity-theory|boolean-logic | ['https://arxiv.org/abs/1603.07881'] | 1 |
73,284,610 | <p>It seems that Chrome's default configuration is to open PDFs in its built-in viewer rather than downloading them. You can change this in the options. I am attaching a working example based on <a href="https://arxiv.org/" rel="nofollow noreferrer">Arxiv</a>, which has safe pdf downloads:</p>
<pre><code>import os

from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
"download.default_directory": os.path.join(os.getcwd(),"Downloads"), #Set directory to save your downloaded files.
"download.prompt_for_download": False, #Downloads the file without confirmation.
"download.directory_upgrade": True,
"plugins.always_open_pdf_externally": True #Disable PDF opening.
})
driver = webdriver.Chrome(os.path.join(os.getcwd(),"Downloads","chromedriver"),options=options) #Replace with correct path to your chromedriver executable.
driver.get("https://arxiv.org/list/hep-lat/1902") #Base url
driver.find_elements(By.XPATH,"/html/body/div[5]/div/dl/dt[1]/span/a[2]")[0].click() #Clicks the link that would normally open the PDF; now it downloads instead. Change the XPath to fit your needs.
</code></pre> | 2022-08-08 22:19:58.777000+00:00 | 2022-08-08 22:19:58.777000+00:00 | null | null | 73,284,253 | <p>I've been writing code in Python using Selenium that should access a webpage and download a PDF. But when the driver clicks the button, it opens a new tab with the PDF, and I can't use that URL to download the PDF.
Can anyone help me, please?</p>
<p>(Example: if I ask my driver to "get" the PDF "URL", the driver opens the page I was on before, the one that had the button that opens the PDF in the Chrome previewer.)</p>
<p>If the problem seems understandable please inform me so I can try to explain it better.</p> | 2022-08-08 21:33:20.607000+00:00 | 2022-08-08 22:19:58.777000+00:00 | null | python|selenium-chromedriver | ['https://arxiv.org/'] | 1 |
37,518,941 | <blockquote>
<p>Does GitHub support the version history for pdf files?</p>
</blockquote>
<p>Not directly, in that <a href="https://stackoverflow.com/a/10971038/6309">it cannot display diffs</a>.<br>
You would <a href="https://gist.github.com/thbar/4943276" rel="nofollow noreferrer">need an external diff</a> like <a href="http://www.qtrac.eu/diffpdf.html" rel="nofollow noreferrer"><strong><code>diffpdf</code></strong></a> for that.</p>
<blockquote>
<p>Maybe github can provide the version control for Arxiv.org.</p>
</blockquote>
<p>Right now, GitHub is not used by Arxiv.org. Their <a href="http://arxiv.org/new" rel="nofollow noreferrer">"new" page</a> mentions in 2008:</p>
<blockquote>
<p>We have implemented version control for papers submitted prior to November 1997 in the same way as for papers submitted later.</p>
</blockquote>
<p>In 2011: </p>
<blockquote>
<p><a href="http://arxiv.org/help/bulk_data_s3" rel="nofollow noreferrer">Bulk data available on Amazon S3</a>: The bulk data available for download from Amazon S3 has been extended to include both PDF and source files of the latest versions of all arXiv articles.</p>
</blockquote> | 2016-05-30 06:26:43.573000+00:00 | 2016-05-30 06:26:43.573000+00:00 | 2017-05-23 12:15:54.720000+00:00 | null | 37,517,583 | <p>Many papers are archived in <a href="http://arxiv.org/" rel="nofollow">Arxiv.org</a>. A lot of them might also have version histories. But it seems we can't get the update information for one paper when it is updated. Does github support the version history for pdf files? Maybe github can provide the version control for <a href="http://arxiv.org/" rel="nofollow">Arxiv.org</a>.</p> | 2016-05-30 04:22:30.867000+00:00 | 2016-05-30 06:26:43.573000+00:00 | null | github | ['https://stackoverflow.com/a/10971038/6309', 'https://gist.github.com/thbar/4943276', 'http://www.qtrac.eu/diffpdf.html', 'http://arxiv.org/new', 'http://arxiv.org/help/bulk_data_s3'] | 5 |
65,145,859 | <p>You are asking quite a few questions in your question. I'll try to cover them all, but I do suggest reading the documentation and vignette from <a href="https://cran.r-project.org/web/packages/lme4/" rel="nofollow noreferrer"><code>lme4</code></a> and the <a href="http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#lme4-1" rel="nofollow noreferrer">glmmFAQ</a> page for more information. Also I'd highly recommend searching for these topics on google scholar, as they are fairly well covered.</p>
<p>I'll start somewhere simple</p>
<h1>Note 2 (why is my model singular?)</h1>
<p>Your model is highly singular, because the way you are simulating your data does not indicate any dependency between the data itself. If you wanted to simulate a binomial model you would use <code>g(eta) = X %*% beta</code> to simulate your linear predictor and thus the probability for success. One can then use this probability for simulating the your binary outcome. This would thus be a 2 step process, first using some known <code>X</code> or randomly simulated <code>X</code> given some prior distribution of our choosing. In the second step we would then use <code>rbinom</code> to simulate binary outcome while keeping it dependent on our predictor <code>X</code>.</p>
<p>In your example you are simulating independent <code>X</code> and a <code>y</code> where the probability is independent of <code>X</code> as well. Thus, when we look at the outcome <code>y</code> the probability of success is equal to <code>p=c</code> for all subgroup for some constant <code>c</code>.</p>
<h1>Can someone explain me the actual difference between Method 1 and Method 2? (<code>(1| year:plot)</code> vs <code>(1|year/plot)</code>)</h1>
<p>This is explained in the package vignette <a href="https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf" rel="nofollow noreferrer">fitting linear mixed effects models with lme4</a> in the table on page 7.</p>
<ol>
<li><code>(1|year/plot)</code> indicates that we have 2 mixed intercept effects, <code>year</code> and <code>plot</code> and <code>plot</code> is nested within <code>year</code>.</li>
<li><code>(1|year:plot)</code> indicates a single mixed intercept effect, <code>plot</code> nested within <code>year</code>. Eg. we do not include the main effect of <code>year</code>. It would be somewhat similar to having a model without intercept (although less drastic, and interpretation is not destroyed).</li>
</ol>
<p>It is more common to see the first rather than the second, but we could write the first as a function of the second <code>(1|year) + (1|year:plot)</code>.</p>
<h1><strong>Thus: Is it indeed more appropriate to use the cbind-method than the raw binary data?</strong></h1>
<p><code>cbind</code> in a formula is used for binomial data (or multivariate analysis), while for binary data we use the raw vector or <code>0/1</code> indicating success/failure, eg. aggregate binary data (similar to how we'd use <code>glm</code>). If you are uninterested in the random/fixed effect of subplot, you might be able to aggregate your data across plots, and then it would likely make sense. Otherwise stay with you <code>0/1</code> outcome vector indicating either success or failures.</p>
<h1>What would be the correct random model structure and why?</h1>
<p>This is a topic that is extremely hard to give a definitive answer to, and one that is still actively researched. Depending on your statistical paradigm opinions differ greatly.</p>
<h2>Method 1: The classic approach</h2>
<p>Classic mixed modelling is based upon knowledge of the data you are working with. In general there are several "rules of thumb" for choosing these parameters. I've gone through a few in <a href="https://stackoverflow.com/questions/62469170/lmer-or-binomial-glmm/62563069#62563069">my answer here</a>. If you are "not interested" in the systematic effect and it can be thought of as a random sample of some population, then it could be a random effect. If it is the population, eg. samples do not change if the process is repeated, then it likely shouldn't.</p>
<p>This approach often yields "decent" choices for those who are new to mixed effect models, but is highly criticized by authors who tend towards methods similar to those we'd use in non-mixed models (eg. visualizing to base our choice and testing for significance).</p>
<h2>Method 2: Using visualization</h2>
<p>If you are able to split your data into independent subgroups while keeping the fixed effect structure, a reasonable approach for checking potential random effects is to estimate marginal models (eg. using <code>glm</code>) across these subgroups and see if the fixed effects are "normally distributed" between these observations. The function <code>lmList</code> (in <code>lme4</code>) is designed for this specific approach. In linear models we would indeed expect these to be normally distributed, and thus we can get an indication whether a specific grouping "might" be a valid random effect structure. I believe the same is approximately true in the case of generalized linear models, but I lack references. I know that Ben Bolker has advocated for this approach in a prior article of his (the first reference below) that I used during my thesis. However, this is only a valid approach for strictly separable data, and the implementation is not robust in the case where factor levels are not shared across all groups.</p>
<p>So in short: If you have the right data, this approach is simple, fast and seemingly highly reliable.</p>
<h2>Method 3: Fitting maximal/minimal models and decreasing/expanding model based on AIC or AICc (or p-value tests or alternative metrics)</h2>
<p>Finally, an alternative is to use a "step-wise"-like procedure. There are advocates of both starting with maximal models and starting with minimal models (I'm certain at least one of my references below talks about problems with both; otherwise check glmmFAQ) and then testing your random effects for their validity. Just like classic regression, this is somewhat of a double-edged sword: the procedure is extremely simple to apply, but amazingly complex to carry out correctly.</p>
<p>For this method to be successful you'd have to perform cross-validation or out-of-sample validation to avoid selection bias, just as with standard models; but unlike with standard models, sampling becomes complicated because:</p>
<ol>
<li>The fixed effects are conditional on the random structure.</li>
<li>You will need your training and testing samples to be independent.</li>
<li>As this is dependent on your random structure, and this is chosen in a step-wise approach, it is hard to avoid information leakage in some of your models.</li>
<li>The only certain way to avoid problems here is to define the space that you will be testing in, and to select samples based on the most restrictive model definition.</li>
</ol>
<p>Next we also have problems with the choice of metrics for evaluation. If one is interested in the random effects it makes sense to use AICc (AIC estimate of the conditional model), while for fixed effects it might make more sense to optimize AIC (AIC estimate of the marginal model). I'd suggest checking the references to AIC and AICc on glmmFAQ, and be wary, since the large-sample results for these may be uncertain outside a very restrictive set of mixed models (namely "enough independent samples over random effects").</p>
<p>Another approach here is to use p-values instead of some metric for the procedure. But one should likely be even more wary of tests on random effects. Even using a Bayesian approach, or bootstrapping with an incredibly high number of resamples, these tests are sometimes just not very good. Again we need "enough independent samples over random effects" to ensure the accuracy.</p>
<p>The <a href="https://cran.r-project.org/web/packages/DHARMa/index.html" rel="nofollow noreferrer"><code>DHARMa</code></a> package provides some very interesting testing methods for mixed effects that might be better suited. While I was working in the area, the author was still (seemingly) developing an article documenting the validity of their chosen method. Even if one does not use it for initial selection, I can only recommend checking it out and deciding whether one believes in their methods. It is by far the simplest approach for a visual test with simple interpretation (eg. almost no prior knowledge is needed to interpret the plots).</p>
<p>A final note on this method would thus be: It is indeed an approach, but one I would personally <strong>not</strong> recommend. It requires either extreme care or the author accepting ignorance of model assumptions.</p>
<h2>Conclusion</h2>
<p>Mixed effect parameter selection is something that is <strong>difficult</strong>. My experience tells me that mostly a combination of methods 1 and 2 is used, while method 3 seems to be used mostly by newer authors, who tend to either ignore out-of-sample error (measuring model metrics on the data used for training), ignore independence-of-samples problems when fitting random effects, or restrict themselves to only using this method for testing fixed effect parameters. All 3 do however have some validity. I myself tend to be in the first group, and base my decision upon my "experience" within the field, rules of thumb and the restrictions of my data.</p>
<h2>Your specific problem.</h2>
<p>Given your specific problem, I would assume a mixed effect structure of <code>(1|year/plot/subplot)</code> would be the correct structure. If you add autoregressive (time-spatial) effects, <code>year</code> likely disappears. The reason for this structure is that in geo-analysis and the analysis of land plots, the classic approach is to include an effect for each plot. If each plot can then further be indexed into subplots, it is natural to think of "subplot" as nested in "plot". Assuming you do not model autoregressive effects, I would think of <code>time</code> as random for the reasons that you already stated. Some years we'll have drier and hotter weather than others. As the plots measured have to be present in a given year, these would be nested in year.</p>
<p>This is what I'd call the <code>maximal</code> model and it might not be feasible depending on your amount of data. In this case I would try using <code>(1|time) + (1|plot/subplot)</code>. If both are feasible I would compare these models, either using bootstrapping methods or approximate LRT tests.</p>
<p><strong>Note:</strong> It seems not unlikely that <code>(1|time/plot/subplot)</code> would result in "individual level effects", eg. one random effect per row in your data. For reasons that I have long since forgotten (but once read), it is not plausible to have individual (also called subject-level) effects in binary mixed models. In this case it might also make sense to use the alternative approach, or to test whether your model assumptions hold when withholding <code>subplot</code> from your random effects.</p>
<p>Below I've added some useful references, some of which are directly relevant to the question. In addition, check out the glmmFAQ site by Ben Bolker.</p>
<h1>References</h1>
<ol>
<li>Bolker, B. et al. (2009). „Generalized linear mixed models: a practical guide for ecology and evolution“. In: Trends in ecology & evolution 24.3, p. 127–135.</li>
<li>Bolker, B. et al. (2011). „GLMMs in action: gene-by-environment interaction in total fruit production of wild populations of Arabidopsis thaliana“. In: Revised version, part 1 1, p. 127–135.</li>
<li>Eager, C. and J. Roy (2017). „Mixed effects models are sometimes terrible“. In: arXiv preprint arXiv:1701.04858. url: <a href="https://arxiv.org/abs/1701.04858" rel="nofollow noreferrer">https://arxiv.org/abs/1701.04858</a> (last seen 19.09.2019).</li>
<li>Feng, Cindy et al. (2017). „Randomized quantile residuals: an omnibus model diagnostic tool with unified reference distribution“. In: arXiv preprint arXiv:1708.08527. (last seen 19.09.2019).</li>
<li>Gelman, A. and Jennifer Hill (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.</li>
<li>Hartig, F. (2019). DHARMa: Residual Diagnostics for Hierarchical (Multi-Level / Mixed) Regression Models. R package version 0.2.4. url: <a href="http://florianhartig.github.io/DHARMa/" rel="nofollow noreferrer">http://florianhartig.github.io/DHARMa/</a> (last seen 19.09.2019).</li>
<li>Lee, Y. and J. A. Nelder (2004). „Conditional and Marginal Models: Another View“. In: Statistical Science 19.2, p. 219–238.<br />
doi: 10.1214/088342304000000305. url: <a href="https://doi.org/10.1214/088342304000000305" rel="nofollow noreferrer">https://doi.org/10.1214/088342304000000305</a></li>
<li>Lin, D. Y. et al. (2002). „Model-checking techniques based on cumulative residuals“. In: Biometrics 58.1, p. 1–12. (last seen 19.09.2019).</li>
<li>Lin, X. (1997). „Variance Component Testing in Generalised Linear Models with Random Effects“. In: Biometrika 84.2, p. 309–326. issn: 00063444. url: <a href="http://www.jstor.org/stable/2337459" rel="nofollow noreferrer">http://www.jstor.org/stable/2337459</a> (last seen 19.09.2019).</li>
<li>Stiratelli, R. et al. (1984). „Random-effects models for serial observations with binary response“. In:<br />
Biometrics, p. 961–971.</li>
</ol> | 2020-12-04 15:01:35.947000+00:00 | 2020-12-04 15:33:36.823000+00:00 | 2020-12-04 15:33:36.823000+00:00 | null | 65,129,483 | <p>Could someone help me to determine the correct random variable structure in my binomial GLMM in lme4?</p>
<p>I will first try to explain my data as best as I can. I have binomial data of seedlings that were eaten (1) or not eaten (0), together with data on vegetation cover. I am trying to figure out if there is a relationship between vegetation cover and the probability of a tree being eaten, as the other vegetation is a food source that could attract herbivores to a certain forest patch.</p>
<p>The data is collected in ~90 plots scattered over a National Park for 9 years now. Some were measured all years, some were measured only a few years (destroyed/newly added plots). The original dataset is split in 2 (deciduous vs coniferous), each containing ~55.000 entries. Per plot about 100 saplings were measured every time, so the two separate datasets probably contain about 50 trees per plot (though this will not always be the case, since the decid:conif ratio is not always equal). Each plot consists of 4 subplots.
<em>I am aware that there might be spatial autocorrelation due to plot placement, but we will not correct for this, yet.</em></p>
<p>Every year the vegetation is surveyed in the same period. Vegetation cover is estimated at plot-level, individual trees (binary) are measured at a subplot-level.
All trees are measured, so the amount of responses per subplot will differ between subplots and years, as the forest naturally regenerates.</p>
<p>Unfortunately, I cannot share my original data, but I tried to create an example that captures the essentials:</p>
<pre><code>#set seed for whole procedure
addTaskCallback(function(...) {set.seed(453);TRUE})
# Generate vector containing individual vegetation covers (in %)
cover1vec <- c(sample(0:100,10, replace = TRUE)) #the ',number' is amount of covers generated
# Create dataset
DT <- data.frame(
eaten = sample(c(0,1), 80, replace = TRUE),
plot = as.factor(rep(c(1:5), each = 16)),
subplot = as.factor(rep(c(1:4), each = 2)),
year = as.factor(rep(c(2012,2013), each = 8)),
cover1 = rep(cover1vec, each = 8)
)
</code></pre>
<p>Which will generate this dataset:</p>
<pre><code>>DT
eaten plot subplot year cover1
1 0 1 1 2012 4
2 0 1 1 2012 4
3 1 1 2 2012 4
4 1 1 2 2012 4
5 0 1 3 2012 4
6 1 1 3 2012 4
7 0 1 4 2012 4
8 1 1 4 2012 4
9 1 1 1 2013 77
10 0 1 1 2013 77
11 0 1 2 2013 77
12 1 1 2 2013 77
13 1 1 3 2013 77
14 0 1 3 2013 77
15 1 1 4 2013 77
16 0 1 4 2013 77
17 0 2 1 2012 46
18 0 2 1 2012 46
19 0 2 2 2012 46
20 1 2 2 2012 46
....etc....
80 0 5 4 2013 82
</code></pre>
<p><em>Note1:</em> to clarify again, in this example the number of responses is the same for every subplot:year combination, making the data balanced, which is not the case in the original dataset.
<em>Note2:</em> this example can not be run in a GLMM, as I get a singularity warning and all my random effect measurements are zero. Apparently my example is not appropriate to actually use (because using sample() caused the 0 and 1 to be in too even amounts to have large enough effects?).</p>
<p>As you can see from the example, cover data is the same for every plot:year combination.
Plots are measured multiple years (only 2012 and 2013 in the example), so there are <strong>repeated measures</strong>.
Additionally, a <strong>year effect</strong> is likely, given the fact that we have e.g. drier/wetter years.</p>
<p>First I thought about the following model structure:</p>
<pre><code>library(lme4)
mod1 <- glmer(eaten ~ cover1 + (1 | year) + (1 | plot), data = DT, family = binomial)
summary(mod1)
</code></pre>
<p>Where (1 | year) should correct for differences between years and (1 | plot) should correct for the repeated measures.</p>
<p>But then I started thinking: all trees measured in plot 1, during year 2012 will be more similar to each other than when they are compared with (partially the same) trees from plot 1, during year 2013.
So, I doubt that this random model structure will correct for this <em>within plot temporal effect</em>.</p>
<p>So my best guess is to add another random variable, where this "interaction" is accounted for.
I know of two ways to possibly achieve this:</p>
<p><em>Method 1.</em>
Adding the random variable " + (1 | year:plot)"</p>
<p><em>Method 2.</em>
Adding the random variable " + (1 | year/plot)"</p>
<p>From what other people told me, I still do not know the difference between the two.
I saw that <em>Method 2</em> added an extra random variable (year.1) compared to <em>Method 1</em>, but I do not know how to interpret that extra random variable.</p>
<p>As an example, I added the Random effects summary using <em>Method 2</em> (zeros due to singularity issues with my example data):</p>
<pre><code>Random effects:
Groups Name Variance Std.Dev.
plot.year (Intercept) 0 0
plot (Intercept) 0 0
year (Intercept) 0 0
year.1 (Intercept) 0 0
Number of obs: 80, groups: plot:year, 10; plot, 5; year, 2
</code></pre>
<p>Can someone explain me the actual difference between <em>Method 1</em> and <em>Method 2</em>?
I am trying to understand what is happening, but cannot grasp it.</p>
<p>I already tried to get advice from a colleague and he mentioned that it is likely more appropriate to use cbind(success, failure) per plot:year combination.
Via this site I found that cbind is used in binomial models when <em>Ntrials > 1</em>, which I think is indeed the case given our sampling procedure.</p>
<p>I wonder, if cbind is already used on a plot:year combination, whether I need to add a plot:year random variable?
When using cbind, the example data would look like this:</p>
<pre><code>>DT3
plot year cover1 Eaten_suc Eaten_fail
8 1 2012 4 4 4
16 1 2013 77 4 4
24 2 2012 46 2 6
32 2 2013 26 6 2
40 3 2012 91 2 6
48 3 2013 40 3 5
56 4 2012 61 5 3
64 4 2013 19 2 6
72 5 2012 19 5 3
80 5 2013 82 2 6
</code></pre>
<p><strong>What would be the correct random model structure and why?</strong>
I was thinking about:</p>
<p>Possibility A</p>
<pre><code>mod4 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot),
data = DT3, family = binomial)
</code></pre>
<p>Possibility B</p>
<pre><code>mod5 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot) + (1 | year:plot),
data = DT3, family = binomial)
</code></pre>
<p>But doesn't cbind(success, failure) already correct for the year:plot dependence?</p>
<p>Possibility C</p>
<pre><code>mod6 <- glmer(cbind(Eaten_suc, Eaten_fail) ~ cover1 + (1 | year) + (1 | plot) + (1 | year/plot),
data = DT3, family = binomial)
</code></pre>
<p>As I do not yet understand the difference between year:plot and year/plot</p>
<p><strong>Thus: Is it indeed more appropriate to use the cbind-method than the raw binary data? And what random model structure would be necessary to prevent pseudoreplication and other dependencies?</strong></p>
<p>Thank you in advance for your time and input!</p>
<p>EDIT 7/12/20: I added some extra information about the original data</p> | 2020-12-03 15:57:25.250000+00:00 | 2020-12-07 14:15:32.227000+00:00 | 2020-12-07 14:15:32.227000+00:00 | r|glm|lme4|mixed-models|random-effects | ['https://cran.r-project.org/web/packages/lme4/', 'http://bbolker.github.io/mixedmodels-misc/glmmFAQ.html#lme4-1', 'https://cran.r-project.org/web/packages/lme4/vignettes/lmer.pdf', 'https://stackoverflow.com/questions/62469170/lmer-or-binomial-glmm/62563069#62563069', 'https://cran.r-project.org/web/packages/DHARMa/index.html', 'https://arxiv.org/abs/1701.04858', 'http://florianhartig.github.io/DHARMa/', 'https://doi.org/10.1214/088342304000000305', 'http://www.jstor.org/stable/2337459'] | 9 |
72,323,549 | <p>Mutual information is defined for distributions and not individual points. So, I will write the next part assuming v1 and v2 are samples from a distribution, p. I will also assume that you have n samples from p, n>1.</p>
<p>You want a method to estimate mutual information from samples. There are many ways to do this. One of the simplest ways to do this would be to use a non-parametric estimator like NPEET (<a href="https://github.com/gregversteeg/NPEET" rel="nofollow noreferrer">https://github.com/gregversteeg/NPEET</a>). It works with numpy (you can convert from torch to numpy for this). There are more involved parametric models for which you may be able to find implementation in pytorch (See <a href="https://arxiv.org/abs/1905.06922" rel="nofollow noreferrer">https://arxiv.org/abs/1905.06922</a>).</p>
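<p>A minimal sketch of the NPEET route (the package is installed from the GitHub repo linked above; note the k-nearest-neighbour estimator needs a reasonable number of samples to be meaningful):</p>
<pre><code>import torch
from npeet import entropy_estimators as ee  # pip install git+https://github.com/gregversteeg/NPEET.git

v1 = torch.tensor([0.999, 0.998, 0.001, 0.98])
v2 = torch.tensor([0.97, 0.01, 0.997, 0.999])

# NPEET expects lists of d-dimensional points, so wrap each scalar sample
x = [[float(a)] for a in v1.detach().cpu()]
y = [[float(b)] for b in v2.detach().cpu()]
print(ee.mi(x, y, k=3))  # Kraskov-style k-NN mutual information estimate
</code></pre>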
<p>If you only have two vectors and want to compute a similarity measure, a dot product similarity would be more suitable than mutual information as there is no distribution.</p> | 2022-05-20 18:45:03.220000+00:00 | 2022-05-20 22:49:34.297000+00:00 | 2022-05-20 22:49:34.297000+00:00 | null | 72,323,285 | <p>I am training a model with pytorch, where I need to calculate the degree of dependence between two tensors (let's say they are the two tensors each containing values very close to zero or one, e.g. v1 = [0.999, 0.998, 0.001, 0.98] and v2 = [0.97, 0.01, 0.997, 0.999]) as a part of my loss function. I am trying to calculate <a href="https://en.wikipedia.org/wiki/Mutual_information" rel="nofollow noreferrer">mutual information</a>, but I can't find any mutual information estimation implementation in PyTorch. Has such a thing been provided anywhere?</p> | 2022-05-20 18:19:56.937000+00:00 | 2022-05-22 17:30:14.930000+00:00 | 2022-05-22 17:30:14.930000+00:00 | pytorch|entropy|mutual-information | ['https://github.com/gregversteeg/NPEET', 'https://arxiv.org/abs/1905.06922'] | 2 |
65,987,266 | <p>I'm not sure if there is a standard practice, but what I have seen others do is simply take the average of the sub-token embeddings. Example: <a href="https://arxiv.org/abs/2006.01346" rel="nofollow noreferrer">https://arxiv.org/abs/2006.01346</a>, Section 2.3, line 4</p> | 2021-02-01 04:43:22.710000+00:00 | 2021-02-01 04:43:22.710000+00:00 | null | null | 65,976,277 | <p>As you may know, <code>RoBERTa (BERT, etc.)</code> has its own tokenizer and sometimes you get pieces of a given word as tokens, e.g. embeddings » embed, #dings</p>
<p>Given the nature of the task I am working on, I need a single representation for each word. How do I get it?</p>
<p><strong>CLEARANCE:</strong></p>
<blockquote>
<p>sentence: "embeddings are good" --> 3 word tokens given<br />
output: [embed,#dings,are,good] --> 4 tokens are out</p>
</blockquote>
<p>When I give a <em>sentence</em> to pre-trained RoBERTa, I get encoded tokens. At the end I need a representation for each token. What's the solution? <strong>Summing embed + #dings tokens point-wise?</strong></p> | 2021-01-31 06:13:17.343000+00:00 | 2021-02-01 04:43:22.710000+00:00 | 2021-01-31 08:36:56.687000+00:00 | word-embedding|bert-language-model|pre-trained-model|roberta | ['https://arxiv.org/abs/2006.01346'] | 1
47,935,024 | <p>Using only the last hidden state without attention has insufficient representation power, especially when the hidden size is small. A few systems prior to the invention of attention are: </p>
<p><a href="https://arxiv.org/abs/1409.3215" rel="nofollow noreferrer">https://arxiv.org/abs/1409.3215</a></p>
<p><a href="https://arxiv.org/abs/1506.05869" rel="nofollow noreferrer">https://arxiv.org/abs/1506.05869</a></p> | 2017-12-22 02:05:06.437000+00:00 | 2017-12-22 02:05:06.437000+00:00 | null | null | 44,081,665 | <p>Are there any successful application of deep seq2seq model where the decoder read ONLY the encoder's output state (final step of encoder's internal state) at its first step, and carry out multiple steps decoding? </p>
<p>I.e. no peeking, no attention etc. At each step the decoder's input is only the previous step's output and state.</p>
<p>I could see a few seq2seq autoencoder implementations, and I wonder if they really converge after a long time of training, especially when the internal state is small.</p> | 2017-05-20 03:38:29.490000+00:00 | 2017-12-22 02:05:06.437000+00:00 | null | deep-learning|autoencoder | ['https://arxiv.org/abs/1409.3215', 'https://arxiv.org/abs/1506.05869'] | 2
67,453,765 | <p>For generalized linear models (i.e. logistic regression, ridge regression, poisson regression),
you can efficiently tune many regularization hyperparameters
using exact derivatives and approximate leave-one-out cross-validation.</p>
<p>But don't stop at just the gradient, compute the full hessian and use a second-order optimizer -- it's
both more efficient and robust.</p>
<p>sklearn doesn't currently have this functionality, but there are other tools available that can do it.</p>
<p>For example, here's how you can use the python package <a href="https://buildingblock.ai/" rel="nofollow noreferrer">bbai</a> to fit the
hyperparameter for ridge regularized logistic regression to maximize the log likelihood of the
approximate leave-one-out cross-validation of the training data set for the Wisconsin Breast Cancer Data Set.</p>
<p><strong>Load the data set</strong></p>
<pre><code>from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
data = load_breast_cancer()
X = data['data']
X = StandardScaler().fit_transform(X)
y = data['target']
</code></pre>
<p><strong>Fit the model</strong></p>
<pre><code>import bbai.glm
model = bbai.glm.LogisticRegression()
# Note: it automatically fits the C parameter to minimize the error on
# the approximate leave-one-out cross-validation.
model.fit(X, y)
</code></pre>
<p>Because it uses both the gradient and hessian with efficient exact formulas
(no automatic differentiation), it can dial into an exact hyperparameter quickly with only a few
evaluations.</p>
<p>YMMV, but when I compare it to sklearn's LogisticRegressionCV with default parameters, it runs
in a fraction of the time.</p>
<pre><code>t1 = time.time()
model = bbai.glm.LogisticRegression()
model.fit(X, y)
t2 = time.time()
print('***** approximate leave-one-out optimization')
print('C = ', model.C_)
print('time = ', (t2 - t1))
from sklearn.linear_model import LogisticRegressionCV
print('***** sklearn.LogisticRegressionCV')
t1 = time.time()
model = LogisticRegressionCV(scoring='neg_log_loss', random_state=0)
model.fit(X, y)
t2 = time.time()
print('C = ', model.C_[0])
print('time = ', (t2 - t1))
</code></pre>
<p>Prints</p>
<pre><code>***** approximate leave-one-out optimization
C = 0.6655139682151275
time = 0.03996014595031738
***** sklearn.LogisticRegressionCV
C = 0.3593813663804626
time = 0.2602980136871338
</code></pre>
<h2>How it works</h2>
<p>Approximate leave-one-out cross-validation (ALOOCV) is a close approximation to leave-one-out
cross-validation that's much more efficient to evaluate for generalized linear models.</p>
<p>It first fits the regularized model. Then it uses a single step of Newton's algorithm to approximate what
the model weights would be when we leave a single data point out. If the regularized cost function for
the generalized linear model is represented as</p>
<img src="https://i.stack.imgur.com/zZ0zp.png" width="300"/>
<p>Then the ALOOCV can be computed as</p>
<img src="https://i.stack.imgur.com/JICyA.png" width="350"/>
<p>where</p>
<img src="https://i.stack.imgur.com/FLMS7.png" width="300"/>
<p>(Note: H represents the hessian of the cost function at the optimal weights)</p>
<p>For more background on ALOOCV, you can check out this <a href="https://buildingblock.ai/logistic-regression-guide#approximate-leave-one-out-cross-validation" rel="nofollow noreferrer">guide</a>.</p>
<p>It's also possible to compute exact derivatives for ALOOCV which makes it efficient to optimize.</p>
<p>I won't put the derivative formulas here as they are quite involved, but see the paper
<a href="https://arxiv.org/abs/2011.10218" rel="nofollow noreferrer">Optimizing Approximate Leave-one-out Cross-validation</a>.</p>
<p>If we plot out ALOOCV and compare to leave-one-out cross-validation for the example data set,
you can see that it tracks it very closely and the ALOOCV optimum is nearly the same as the
LOOCV optimum.</p>
<p><strong>Compute Leave-one-out Cross-validation</strong></p>
<pre><code>import numpy as np
def compute_loocv(X, y, C):
model = bbai.glm.LogisticRegression(C=C)
n = len(y)
loo_likelihoods = []
for i in range(n):
train_indexes = [i_p for i_p in range(n) if i_p != i]
test_indexes = [i]
X_train, X_test = X[train_indexes], X[test_indexes]
y_train, y_test = y[train_indexes], y[test_indexes]
model.fit(X_train, y_train)
pred = model.predict_proba(X_test)[0]
loo_likelihoods.append(pred[y_test[0]])
return sum(np.log(loo_likelihoods))
</code></pre>
<p><strong>Compute Approximate Leave-one-out Cross-validation</strong></p>
<pre><code>import scipy
def fit_logistic_regression(X, y, C):
model = bbai.glm.LogisticRegression(C=C)
model.fit(X, y)
return np.array(list(model.coef_[0]) + list(model.intercept_))
def compute_hessian(p_vector, X, alpha):
n, k = X.shape
a_vector = np.sqrt((1 - p_vector)*p_vector)
R = scipy.linalg.qr(a_vector.reshape((n, 1))*X, mode='r')[0]
H = np.dot(R.T, R)
for i in range(k-1):
H[i, i] += alpha
return H
def compute_alo(X, y, C):
alpha = 1.0 / C
w = fit_logistic_regression(X, y, C)
X = np.hstack((X, np.ones((X.shape[0], 1))))
n = X.shape[0]
y = 2*y - 1
u_vector = np.dot(X, w)
p_vector = scipy.special.expit(u_vector*y)
H = compute_hessian(p_vector, X, alpha)
L = np.linalg.cholesky(H)
T = scipy.linalg.solve_triangular(L, X.T, lower=True)
h_vector = np.array([np.dot(ti, ti) for pi, ti in zip(p_vector, T.T)])
loo_u_vector = u_vector - \
y * (1 - p_vector)*h_vector / (1 - p_vector*(1 - p_vector)*h_vector)
loo_likelihoods = scipy.special.expit(y*loo_u_vector)
return sum(np.log(loo_likelihoods))
</code></pre>
<p><strong>Plot out the results (along with the ALOOCV optimum)</strong></p>
<pre><code>import matplotlib.pyplot as plt
Cs = np.arange(0.1, 2.0, 0.1)
loocvs = [compute_loocv(X, y, C) for C in Cs]
alos = [compute_alo(X, y, C) for C in Cs]
fig, ax = plt.subplots()
ax.plot(Cs, loocvs, label='LOOCV', marker='o')
ax.plot(Cs, alos, label='ALO', marker='x')
ax.axvline(model.C_, color='tab:green', label='C_opt')
ax.set_xlabel('C')
ax.set_ylabel('Log-Likelihood')
ax.set_title("Breast Cancer Dataset")
ax.legend()
</code></pre>
<p>Displays</p>
<p><a href="https://i.stack.imgur.com/oMQsw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oMQsw.png" alt="enter image description here" /></a></p> | 2021-05-09 02:24:58.777000+00:00 | 2021-05-09 04:10:18.287000+00:00 | 2021-05-09 04:10:18.287000+00:00 | null | 43,420,493 | <p>Is there a way to perform hyperparameter tuning in scikit-learn by gradient descent? While a formula for the gradient of hyperparameters might be difficult to compute, numerical computation of the hyperparameter gradient by evaluating two close points in hyperparameter space should be pretty easy. Is there an existing implementation of this approach? Why is or isn't this approach a good idea?</p> | 2017-04-14 23:26:40.533000+00:00 | 2021-05-09 04:10:18.287000+00:00 | null | python|optimization|parameters|scikit-learn | ['https://buildingblock.ai/', 'https://buildingblock.ai/logistic-regression-guide#approximate-leave-one-out-cross-validation', 'https://arxiv.org/abs/2011.10218', 'https://i.stack.imgur.com/oMQsw.png'] | 4 |
53,796,844 | <p>Here are some papers describing gradient-based hyperparameter optimization:</p>
<ul>
<li><a href="https://arxiv.org/abs/1502.03492" rel="nofollow noreferrer">Gradient-based hyperparameter optimization through reversible learning</a> (2015):</li>
</ul>
<blockquote>
<p>We compute exact gradients of cross-validation performance with respect to all hyperparameters by chaining derivatives backwards through the entire training procedure. These gradients allow us to optimize thousands of hyperparameters, including step-size and momentum schedules, weight initialization distributions, richly parameterized regularization schemes, and neural network architectures. We compute hyperparameter gradients by exactly reversing the dynamics of stochastic gradient descent with momentum. </p>
</blockquote>
<ul>
<li><a href="https://arxiv.org/abs/1703.01785" rel="nofollow noreferrer">Forward and reverse gradient-based hyperparameter optimization</a> (2017):</li>
</ul>
<blockquote>
<p>We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two methods of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. [2015] but does not require reversible dynamics. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets.</p>
</blockquote>
<ul>
<li><a href="https://arxiv.org/abs/1909.13371" rel="nofollow noreferrer">Gradient descent: the ultimate optimizer</a> (2019):</li>
</ul>
<blockquote>
<p>Working with any gradient-based machine learning algorithm involves the tedious task of tuning the optimizer's hyperparameters, such as the learning rate. There exist many techniques for automated hyperparameter optimization, but they typically introduce even more hyperparameters to control the hyperparameter optimization process. We propose to instead learn the hyperparameters themselves by gradient descent, and furthermore to learn the hyper-hyperparameters by gradient descent as well, and so on ad infinitum. As these towers of gradient-based optimizers grow, they become significantly less sensitive to the choice of top-level hyperparameters, hence decreasing the burden on the user to search for optimal values.</p>
</blockquote> | 2018-12-15 20:09:46.520000+00:00 | 2019-10-04 14:25:51.547000+00:00 | 2019-10-04 14:25:51.547000+00:00 | null | 43,420,493 | <p>Is there a way to perform hyperparameter tuning in scikit-learn by gradient descent? While a formula for the gradient of hyperparameters might be difficult to compute, numerical computation of the hyperparameter gradient by evaluating two close points in hyperparameter space should be pretty easy. Is there an existing implementation of this approach? Why is or isn't this approach a good idea?</p> | 2017-04-14 23:26:40.533000+00:00 | 2021-05-09 04:10:18.287000+00:00 | null | python|optimization|parameters|scikit-learn | ['https://arxiv.org/abs/1502.03492', 'https://arxiv.org/abs/1703.01785', 'https://arxiv.org/abs/1909.13371'] | 3 |
73,489,380 | <p>Be careful when using weight decay with the vanilla Adam optimizer, as it appears that the vanilla Adam formula is wrong when using weight decay, as pointed out in the article <em>Decoupled Weight Decay Regularization</em> <a href="https://arxiv.org/abs/1711.05101" rel="nofollow noreferrer">https://arxiv.org/abs/1711.05101</a> .</p>
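<p>For reference, the decoupled variant from that paper is available directly in PyTorch. A minimal sketch (the model and hyperparameter values here are just placeholders):</p>
<pre><code>import torch

model = torch.nn.Linear(10, 2)  # any model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
</code></pre>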
<p>You should probably use the AdamW variant when you want to use Adam with weight decay.</p> | 2022-08-25 14:32:10.597000+00:00 | 2022-08-25 14:32:10.597000+00:00 | null | null | 39,517,431 | <p>I'm training a network for image localization with the Adam optimizer, and someone suggested that I use exponential decay. I don't want to try that because the Adam optimizer itself decays the learning rate. But that guy insists, and he said he did that before. So should I do that, and is there any theory behind the suggestion?</p> | 2016-09-15 17:54:15.810000+00:00 | 2022-08-25 14:32:10.597000+00:00 | 2019-10-10 10:53:36.883000+00:00 | neural-network|tensorflow | ['https://arxiv.org/abs/1711.05101'] | 1
39,518,837 | <p>In my experience it is usually not necessary to do learning rate decay with the Adam optimizer.</p>
<p>The theory is that Adam already handles learning rate optimization (<a href="http://arxiv.org/pdf/1412.6980v8.pdf" rel="noreferrer">check reference</a>) :</p>
<blockquote>
<p>"We propose Adam, a method for efficient stochastic optimization that
only requires first-order gradients with little memory requirement.
The method <strong>computes individual adaptive learning rates</strong> for different
parameters from estimates of first and second moments of the
gradients; the name Adam is derived from adaptive moment estimation."</p>
</blockquote>
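<p>That said, if you do want to experiment with exponential decay on top of Adam, a minimal sketch using the current <code>tf.keras</code> API (which postdates this answer) looks like this:</p>
<pre><code>import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=10000, decay_rate=0.96)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
</code></pre>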
<p>As with any deep learning problem YMMV, one size does not fit all, you should try different approaches and see what works for you, etc. etc.</p> | 2016-09-15 19:24:08.540000+00:00 | 2019-03-14 16:32:56.110000+00:00 | 2019-03-14 16:32:56.110000+00:00 | null | 39,517,431 | <p>I'm training a network for image localization with Adam optimizer, and someone suggest me to use exponential decay. I don't want to try that because Adam optimizer itself decays learning rate. But that guy insists and he said he did that before. So should I do that and is there any theory behind your suggestion?</p> | 2016-09-15 17:54:15.810000+00:00 | 2022-08-25 14:32:10.597000+00:00 | 2019-10-10 10:53:36.883000+00:00 | neural-network|tensorflow | ['http://arxiv.org/pdf/1412.6980v8.pdf'] | 1 |
51,897,289 | <p>Since you are parsing from TensorFlow, maybe it's better to see which layers TensorRT <em>DOES</em> support. As of TensorRT 4, the following layers are supported:</p>
<ul>
<li>Placeholder</li>
<li>Const</li>
<li>Add, Sub, Mul, Div, Minimum and Maximum</li>
<li>BiasAdd</li>
<li>Negative, Abs, Sqrt, Rsqrt, Pow, Exp and Log</li>
<li>FusedBatchNorm</li>
<li>ReLU, TanH, Sigmoid</li>
<li>SoftMax</li>
<li>Mean</li>
<li>ConcatV2</li>
<li>Reshape</li>
<li>Transpose</li>
<li>Conv2D</li>
<li>DepthwiseConv2dNative</li>
<li>ConvTranspose2D</li>
<li>MaxPool</li>
<li>AvgPool</li>
<li>Pad is supported if followed by one of these TensorFlow layers:
Conv2D, DepthwiseConv2dNative, MaxPool, and AvgPool</li>
</ul>
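<p>A quick way to see every op type your frozen graph actually uses before converting (a sketch in TF 1.x style, matching the question's setup; the file name is a placeholder):</p>
<pre><code>import tensorflow as tf

graph_def = tf.GraphDef()
with open("frozen_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

op_types = sorted({node.op for node in graph_def.node})
print(op_types)  # compare by eye against the supported list above
</code></pre>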
<p>From what I see in your logs you are trying to deploy LaneNet, is it the LaneNet of <a href="https://arxiv.org/pdf/1802.05591.pdf" rel="nofollow noreferrer">this paper</a>?</p>
<p>If that is the case it seems to be a variant of H-Net, haven't read about it but the architecture is the following, according to the paper:</p>
<p><a href="https://i.stack.imgur.com/wqCIG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wqCIG.png" alt="LaneNet Architecture"></a></p>
<p>So I see Convs, ReLUs, MaxPool and Linear, all of which are supported. I don't know about that BN (batch norm); check which layer it refers to, and if it is not on the list of supported layers you'll have to implement it from scratch.
Best of luck!</p> | 2018-08-17 14:12:51.473000+00:00 | 2018-08-17 14:12:51.473000+00:00 | null | null | 51,443,375 | <p>When I convert my tensorflow model (saved as a .pb file) to a uff file, I get an error log like this:</p>
<pre><code>Using output node final/lanenet_loss/instance_seg
Using output node final/lanenet_loss/binary_seg
Converting to UFF graph
Warning: No conversion function registered for layer: Slice yet.
Converting as custom op Slice final/lanenet_loss/Slice
name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
key: "Index"
value {
type: DT_INT32
}
}
attr {
key: "T"
value {
type: DT_INT32
}
}
Traceback (most recent call last):
File "tfpb_to_uff.py", line 16, in <module>
uff_model = uff.from_tensorflow(graphdef=output_graph_def, output_filename=output_path, output_nodes=["final/lanenet_loss/instance_seg", "final/lanenet_loss/binary_seg"], text=True)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py", line 75, in from_tensorflow
name="main")
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 64, in convert_tf2uff_graph
uff_graph, input_replacements)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 51, in convert_tf2uff_node
op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 28, in convert_layer
fields = cls.parse_tf_attrs(tf_node.attr)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in parse_tf_attrs
for key, val in attrs.items()}
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 177, in <dictcomp>
for key, val in attrs.items()}
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 172, in parse_tf_attr_value
return cls.convert_tf2uff_field(code, val)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 146, in convert_tf2uff_field
return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
File "/home/dream/.local/lib/python3.5/site-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2numpy_dtype
return np.dtype(dt[dtype])
TypeError: list indices must be integers or slices, not AttrValue
</code></pre>
<p>This means that the 'Slice' layer is not supported by TensorRT currently.
So I plan to modify this layer in my code.
However, I can't locate the 'Slice' layer in my code, even though I can get information about 'Slice' via the function sess.graph.get_operation_by_name:</p>
<pre><code>graph list name: "final/lanenet_loss/Slice"
op: "Slice"
input: "final/lanenet_loss/Shape_1"
input: "final/lanenet_loss/Slice/begin"
input: "final/lanenet_loss/Slice/size"
attr {
key: "Index"
value {
type: DT_INT32
}
}
attr {
key: "T"
value {
type: DT_INT32
}
}
</code></pre>
<p>How can I locate the 'Slice' layer in my code so that I can replace it with a TensorRT custom layer?</p> | 2018-07-20 13:14:37.887000+00:00 | 2018-08-17 14:12:51.473000+00:00 | null | tensorflow|tensorrt | ['https://arxiv.org/pdf/1802.05591.pdf', 'https://i.stack.imgur.com/wqCIG.png'] | 2
12,966,580 | <p>This is a recommendation problem. </p>
<p>First, the apriori algorithm is no longer the state of the art in recommendation systems. (A related discussion is here: <a href="https://stackoverflow.com/questions/1255663/using-the-apriori-algorithm-for-recommendations">Using the apriori algorithm for recommendations</a>.) </p>
<p>Check out Chapter 9, <strong>Recommendation Systems</strong>, of the book below, <strong>Mining of Massive Datasets</strong>. It's a good tutorial to start with.</p>
<p><a href="http://infolab.stanford.edu/~ullman/mmds.html" rel="nofollow noreferrer">http://infolab.stanford.edu/~ullman/mmds.html</a></p>
<p>Basically you have two different approaches: content-based and collaborative filtering. The latter can be done with an item-based or a user-based approach. There are also methods that combine the approaches to get better recommendations. </p>
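<p>To make the item-based idea concrete, here is a minimal sketch (a toy NumPy ratings matrix with made-up values; real systems add normalization, shrinkage, etc.):</p>
<pre><code>import numpy as np

# rows = users, cols = items, 0 = unrated
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

norms = np.linalg.norm(R, axis=0, keepdims=True)
sim = (R.T @ R) / (norms.T @ norms + 1e-9)  # item-item cosine similarity
scores = R @ sim                            # user x item affinity scores
scores[R > 0] = -np.inf                     # don't re-recommend rated items
print(scores.argmax(axis=1))                # top new item per user
</code></pre>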
<p>Some further readings that might be useful:</p>
<ul>
<li><p>A recent survey paper on recommendation systems:
<a href="http://arxiv.org/abs/1006.5278" rel="nofollow noreferrer">http://arxiv.org/abs/1006.5278</a></p></li>
<li><p>Amazon item-to-item collaborative filtering: <a href="http://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf" rel="nofollow noreferrer">http://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf</a></p></li>
<li><p>Matrix factorization techniques: <a href="http://research.yahoo4.akadns.net/files/ieeecomputer.pdf" rel="nofollow noreferrer">http://research.yahoo4.akadns.net/files/ieeecomputer.pdf</a></p></li>
<li><p>Netflix challenge: <a href="http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/" rel="nofollow noreferrer">http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/</a></p></li>
<li><p>Google news personalization: <a href="http://videolectures.net/google_datar_gnp/" rel="nofollow noreferrer">http://videolectures.net/google_datar_gnp/</a></p></li>
</ul>
<p>Some related stackoverflow topics:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/1407841/how-to-create-my-own-recommendation-engine?rq=1">How to create my own recommendation engine?</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/1592037/where-can-i-learn-about-recommendation-systems?rq=1">Where can I learn about recommendation systems?</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/1992508/recommendation-systems-and-the-cold-start-problem?rq=1">How do I adapt my recommendation engine to cold starts?</a></p></li>
<li><p><a href="https://stackoverflow.com/questions/12778823/web-page-recommender-system">Web page recommender system</a></p></li>
</ul> | 2012-10-19 02:35:25+00:00 | 2012-10-19 06:25:37.523000+00:00 | 2017-05-23 11:48:10.173000+00:00 | null | 12,666,355 | <p>I working on a site that needs to present a set of options that have no particular order. I need to sort this list based on the customer that is viewing the list. I thought of doing this by generating recommendation rules and sorting the list putting the best suited to be liked by the customer on the top. Furthermore I think I'd be cool that if the confidence in the recommendation is high, I can tell the customer why I'm recommending that.</p>
<p>For example, lets say we have an icecream joint who has website where customers can register and make orders online. The customer information contains basic info like gender, DOB, address, etc. My goal is mining previous orders made by customers to generate rules with the format</p>
<pre><code> feature -> flavor
</code></pre>
<p>where feature would be either information in the profile or in the order itself (like, for example, we might ask how many people are you expecting to serve, their ages, etc).
I would then pull the rules that apply to the current customer and use the ones with higher confidence on the top of the list.</p>
<p>My question, what's the best standar algorithm to solve this? I have some experience in apriori and initially I thought of using it but since I'm interested in having only 1 consequent I'm thinking now that maybe other alternatives might be better suited. But in any case I'm not that knowledgeable about machine learning so I'd appreciate any help and references.</p> | 2012-10-01 00:38:22.690000+00:00 | 2012-10-19 06:25:37.523000+00:00 | null | machine-learning|recommendation-engine | ['https://stackoverflow.com/questions/1255663/using-the-apriori-algorithm-for-recommendations', 'http://infolab.stanford.edu/~ullman/mmds.html', 'http://arxiv.org/abs/1006.5278', 'http://www.cs.umd.edu/~samir/498/Amazon-Recommendations.pdf', 'http://research.yahoo4.akadns.net/files/ieeecomputer.pdf', 'http://blog.echen.me/2011/10/24/winning-the-netflix-prize-a-summary/', 'http://videolectures.net/google_datar_gnp/', 'https://stackoverflow.com/questions/1407841/how-to-create-my-own-recommendation-engine?rq=1', 'https://stackoverflow.com/questions/1592037/where-can-i-learn-about-recommendation-systems?rq=1', 'https://stackoverflow.com/questions/1992508/recommendation-systems-and-the-cold-start-problem?rq=1', 'https://stackoverflow.com/questions/12778823/web-page-recommender-system'] | 11 |
55,285,984 | <p>Try several models with different architectures/hyperparameters and see which one performs best.</p>
<p>For example, here is a <a href="https://arxiv.org/pdf/1703.01041.pdf" rel="nofollow noreferrer">paper on the subject</a>. The authors use an evolutionary meta-heuristic to build the best architecture.</p>
<p>In competitions, a useful technique is training an ensemble of models and averaging over their predictions.</p> | 2019-03-21 17:23:14.393000+00:00 | 2019-03-21 17:23:14.393000+00:00 | null | null | 55,266,853 | <p>I am doing image classification, I got train accuracy is 90 and validation is 85, please help me how to improve accuracy.This my model.</p>
<pre><code># assuming aliased Keras imports along the lines of:
# from tensorflow.keras import models as Models, layers as Layers, optimizers as Optimizer, utils as Utils
# and, for the diagram lines below: from IPython.display import SVG; from keras.utils.vis_utils import model_to_dot
model = Models.Sequential()
model.add(Layers.Conv2D(200,kernel_size=(3,3),activation='relu',input_shape=(64,64,3)))
model.add(Layers.Conv2D(180,kernel_size=(3,3),activation='relu'))
model.add(Layers.MaxPool2D(2,2))
model.add(Layers.Conv2D(180,kernel_size=(3,3),activation='relu'))
model.add(Layers.Conv2D(140,kernel_size=(3,3),activation='relu'))
model.add(Layers.Conv2D(100,kernel_size=(3,3),activation='relu'))
model.add(Layers.Conv2D(50,kernel_size=(3,3),activation='relu'))
model.add(Layers.MaxPool2D(2,2))
model.add(Layers.Flatten())
model.add(Layers.Dense(180,activation='relu'))
model.add(Layers.Dropout(rate=0.5))
model.add(Layers.Dense(100,activation='relu'))
model.add(Layers.Dropout(rate=0.5))
model.add(Layers.Dense(50,activation='relu'))
model.add(Layers.Dropout(rate=0.5))
model.add(Layers.Dense(6,activation='softmax'))
model.compile(optimizer=Optimizer.Adam(lr=0.0001),loss='sparse_categorical_crossentropy',metrics=['accuracy'])
SVG(model_to_dot(model).create(prog='dot', format='svg'))
Utils.plot_model(model,to_file='model.png',show_shapes=True)
model.summary()
</code></pre>
<p>These are my training epochs:</p>
<pre><code>Epoch 28/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3929 - acc: 0.8777 - val_loss: 0.4905 - val_acc: 0.8437
Epoch 29/35
11923/11923 [==============================] - 59s 5ms/sample - loss: 0.3621 - acc: 0.8849 - val_loss: 0.5938 - val_acc: 0.8394
Epoch 30/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3541 - acc: 0.8865 - val_loss: 0.4860 - val_acc: 0.8570
Epoch 31/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3460 - acc: 0.8909 - val_loss: 0.5066 - val_acc: 0.8450
Epoch 32/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3151 - acc: 0.9001 - val_loss: 0.5091 - val_acc: 0.8517
Epoch 33/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3184 - acc: 0.9025 - val_loss: 0.5097 - val_acc: 0.8431
Epoch 34/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.3049 - acc: 0.9015 - val_loss: 0.5694 - val_acc: 0.8491
Epoch 35/35
11923/11923 [==============================] - 58s 5ms/sample - loss: 0.2896 - acc: 0.9085 - val_loss: 0.5293 - val_acc: 0.8464
</code></pre>
<p>please help me on how to reduce the error rate.</p> | 2019-03-20 17:27:43.620000+00:00 | 2019-03-21 17:23:14.393000+00:00 | 2019-03-20 17:32:39.887000+00:00 | python|image-processing|keras|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1703.01041.pdf'] | 1 |
36,475,145 | <p>Finally I found the solution.
Check the CARSkit algorithm:</p>
<p><a href="http://arxiv.org/pdf/1511.03780v1.pdf" rel="nofollow">http://arxiv.org/pdf/1511.03780v1.pdf</a></p> | 2016-04-07 11:41:27.347000+00:00 | 2016-04-07 11:41:27.347000+00:00 | null | null | 36,171,281 | <p>I need to build recommender with UserId,ItemId,Preference,Duration as input where Duration is the time(Time the user viewed the page). Already i build itembased and userbased recommender with userid,itemid,preference as input. So i need to increase the input parameter which affects the recommendation.</p>
<p>For example- 1012,112,4.0,3.45 is my input. </p>
<p>where 1012 is userid
112 is itemid
4.0 is preference
3.45 is duration </p>
<p>Thanks for the guidance in advance.</p> | 2016-03-23 06:34:22.660000+00:00 | 2016-04-07 11:41:27.347000+00:00 | null | java | ['http://arxiv.org/pdf/1511.03780v1.pdf'] | 1 |
52,273,888 | <p>I'm sorry to say that probably there isn't a simple and easy way to go with something like this:</p>
<p>Since you're dealing with a database I will assume you have a wide range of possible questions, and that using a simple synonym table will not do</p>
<h1><a href="https://en.wikipedia.org/wiki/Natural_language_processing" rel="nofollow noreferrer">Natural Language Processing</a> (NLP)</h1>
<p>This is a very active research topic in Machine Learning and, in a nutshell, deals with automatically making sense of text. For your particular scenario, to get some intuition about it (and because it applies perfectly to your question), I would recommend starting with: <a href="https://ieeexplore.ieee.org/abstract/document/4438554/" rel="nofollow noreferrer">Question Similarity Calculation for FAQ Answering</a> by Song et al. (2007)</p>
<p>For a state-of-the-art tool that will help with your application, I suggest <a href="https://arxiv.org/abs/1301.3781" rel="nofollow noreferrer">word2vec</a> (that's the paper, but you might also want to follow a <a href="http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/" rel="nofollow noreferrer">tutorial</a>).</p>
<h1>Other options</h1>
<p>If NLP looks more complex than what you're aiming for, I would suggest looking at word similarities, such as: </p>
<ul>
<li><a href="https://en.wikipedia.org/wiki/Levenshtein_distance" rel="nofollow noreferrer">Levenshtein distance</a></li>
<li><a href="https://en.wikipedia.org/wiki/Hamming_distance" rel="nofollow noreferrer">Hamming distance</a></li>
</ul>
<p>These, however, will not perform as well as a well trained NLP system. </p> | 2018-09-11 10:29:27.403000+00:00 | 2018-09-11 10:29:27.403000+00:00 | null | null | 52,273,070 | <p>I am writing a Telegram bot that answers to people's questions about a specific city. I wanted to write a piece of code that compares the message with the questions I have in my sqlite database table.</p>
<p>The biggest problem is that I cannot use <code>difflib.get_close_matches</code>, because <strong>the questions are not in English</strong> and at the moment I'm only handling exactly matching strings, for example:</p>
<pre><code>if msg.lower() == "what can you do?":
send_message("I can answer to any question you have about...", chat_id)
</code></pre>
<p>And that's definitely NOT my aim</p>
<p>So, let's get to the code: I'm using this function to get the last message</p>
<pre><code>URL = "https://api.telegram.org/bot{}/".format(TOKEN)
def get_updates(offset=None):
url = URL + "getUpdates"
if offset:
url += "?offset={}".format(offset)
js = get_json_from_url(url)
return js
</code></pre>
<p>and assign the returned value to the variable <code>updates</code>, as a result I will have the message text in <code>update["message"]["text"]</code></p>
<p>Now the difficult part, I would have to compare the string to the db records, then, if there is not any similar match, I will have to find synonyms of the words in the message and re-compare them to the records. </p>
<p>BUT this would make the program run awfully slow and I don't really have time nor will to make a list of synonyms for every possible word</p>
<p>Can anybody help me find the way to make a comparison and find a similar string in the db keeping the program as fast as possible?</p> | 2018-09-11 09:42:16.170000+00:00 | 2018-09-11 10:29:27.403000+00:00 | 2018-09-11 09:46:02.470000+00:00 | python|python-3.x|chatbot | ['https://en.wikipedia.org/wiki/Natural_language_processing', 'https://ieeexplore.ieee.org/abstract/document/4438554/', 'https://arxiv.org/abs/1301.3781', 'http://kavita-ganesan.com/gensim-word2vec-tutorial-starter-code/', 'https://en.wikipedia.org/wiki/Levenshtein_distance', 'https://en.wikipedia.org/wiki/Hamming_distance'] | 6 |
49,528,620 | <p>You are confusing two tasks: <a href="https://stackoverflow.com/q/33947823/1714410"><strong>semantic</strong> segmentation</a> and <a href="https://stackoverflow.com/a/43081392/1714410"><strong>instance</strong> segmentation</a>.<br>
DeepLabV3+ (and many similar deep nets) solves the <strong>semantic</strong> segmentation problem: that is, labeling each pixel with the class it belongs to. You got very nice results where all pixels belonging to "person" were colored pink. <strong>Semantic</strong> segmentation algorithms do not care how many "person"s there are in the image and do not attempt to label each person separately. As long as all "person" pixels were labeled as such - the task is considered well done.</p>
<p>On the other hand, what you are looking for is <strong>instance</strong> segmentation: that is, labeling each "person" as a <strong>unique</strong> person in the image. This is a far more complex task: not only should you succeed in labeling all "person" pixels as "person", but you also want to group the "person" pixels into the different <strong>instances</strong> in the image.<br>
Since <strong>instance</strong> segmentation is a more difficult task, you would need different models/nets to accomplish it.<br>
I suggest <a href="https://arxiv.org/abs/1703.06870" rel="noreferrer">Mask R-CNN</a> as a good starting point for instance segmentation algorithms.</p> | 2018-03-28 07:34:40.837000+00:00 | 2018-03-28 07:34:40.837000+00:00 | null | null | 49,527,430 | <p>I able to apply <a href="https://github.com/tensorflow/models/tree/master/research/deeplab" rel="nofollow noreferrer" title="DeepLav3+">DeepLabV3+</a> to segment the images, but also like to get the boundary around individual detection. </p>
<p><img src="https://i.stack.imgur.com/SaftK.png" alt="image segmentation mask"></p>
<p>For example, in the image segmentation mask above, I cannot distinguish between the two children on the horse. If I could draw the boundary around each individual child or put a different color on each, I would be able to distinguish them. Please let me know if there is any way to configure DeepLab to achieve that. </p> | 2018-03-28 06:20:48.327000+00:00 | 2019-07-14 17:44:19.230000+00:00 | 2018-03-28 07:34:57.003000+00:00 | tensorflow|neural-network|computer-vision|deep-learning|image-segmentation | ['https://stackoverflow.com/q/33947823/1714410', 'https://stackoverflow.com/a/43081392/1714410', 'https://arxiv.org/abs/1703.06870'] | 3
48,305,447 | <p>This would depend on your configuration. </p>
<p>Kafka is backed by CP ZooKeeper for operations that require strong consistency, such as controller election (which decides on partition leaders), broker registration, dynamic configs, ACLs, etc.<br>
As for the data you send to Kafka - guarantees are <strong>configurable at the producer level, on a per-topic basis, and/or via broker defaults</strong>.</p>
<p>Out of the box with the default config (<code>min.insync.replicas=1</code>, <code>default.replication.factor=1</code>) you are getting an AP system (at-most-once).</p>
<p>If you want to achieve CP, you may set <code>min.insync.replicas=2</code> and a topic replication factor of 3 - then producing a message with <code>acks=all</code> will guarantee a CP setup (at-least-once), but (as expected) will block when not enough replicas (<2) are available for a particular topic/partition pair. (see <a href="https://kafka.apache.org/documentation/#design_ha" rel="nofollow noreferrer">design_ha</a>, <a href="https://kafka.apache.org/documentation/#producerconfigs" rel="nofollow noreferrer">producer config docs</a>)</p>
<p>The Kafka pipeline can be further tuned towards <a href="https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/" rel="nofollow noreferrer">exactly-once</a> semantics.</p>
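<p>As an illustration, a producer configured for the CP-leaning setup above might look like this (a sketch using the confluent-kafka Python client; the broker address and topic are placeholders, and it assumes the topic was created with <code>replication.factor=3</code> and <code>min.insync.replicas=2</code>):</p>
<pre><code>from confluent_kafka import Producer

p = Producer({
    "bootstrap.servers": "broker:9092",
    "acks": "all",                # wait for all in-sync replicas
    "enable.idempotence": True,   # no duplicates from internal retries
})
p.produce("my-topic", b"payload")
p.flush()
</code></pre>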
<p><strong>CAP and PACELC</strong><br>
In terms of PACELC, some latency-improving decisions are already baked into the defaults. For example, Kafka by default does not <code>fsync</code> each message to disk - it writes to the pagecache and lets the OS deal with flushing. The defaults prefer to use replication for durability. This is configurable as well - see the <code>flush.messages</code> and <code>flush.ms</code> broker/topic configurations.</p>
<p>Due to the generic nature of the messages it receives (it's just a bytestream), Kafka cannot do any post-partition merging, or use CRDT tricks to guarantee availability during a partition and eventually restore consistency.</p>
<p>I don't see how you can <code>give up</code> consistency for latency during <code>normal operation</code> in Kafka's <strong>generic bytestream case</strong>. You might give up strong consistency (linearizability) and try to have '<em>more consistency</em>' (covering a few more failure scenarios, or reducing the size of data loss), but this is effectively tuning an AP system for higher consistency rather than tuning a CP system for lower latency. </p>
<p>You might see AP/CP trade-offs and configurations presented as at-least-once vs at-most-once vs exactly-once.</p>
<p><strong>Testing</strong><br>
In order to understand how these parameters affect latency, I think the best way is to <strong>test</strong> your setup with different params. The following command will generate 100 MB of data (1,000,000 records of 100 bytes): </p>
<pre><code>kafka-producer-perf-test --topic test --num-records 1000000 --record-size 100 --throughput 10000000 --producer-props bootstrap.servers=kafka:9092 acks=all
</code></pre>
<p>Then try to use different producer params: </p>
<pre><code>acks=1
acks=all
acks=1 batch.size=1000000 linger.ms=1000
acks=all batch.size=1000000 linger.ms=1000
</code></pre>
<p>It's easy to start a cluster and start/stop/kill nodes to test some failure scenarios, e.g. with <a href="https://github.com/confluentinc/cp-docker-images/blob/master/examples/enterprise-kafka/docker-compose.yml" rel="nofollow noreferrer">compose</a>.</p>
<p><strong>Links and references</strong><br>
You might check the (unfortunately outdated, but still relevant) <a href="https://aphyr.com/posts/293-jepsen-kafka" rel="nofollow noreferrer">Jepsen test</a> and <a href="http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen" rel="nofollow noreferrer">follow-up</a>, just to add some context on how this has evolved over time.</p>
<p>I highly encourage checking these papers, which will give a bit more perspective:<br>
<a href="https://arxiv.org/abs/1509.05393" rel="nofollow noreferrer">A Critique of the CAP Theorem. Martin Kleppmann</a><br>
<a href="https://www.researchgate.net/publication/220476881_CAP_Twelve_years_later_How_the_Rules_have_Changed" rel="nofollow noreferrer">CAP Twelve years later: How the "Rules" have Changed. Eric Brewer</a> </p> | 2018-01-17 16:11:36.320000+00:00 | 2019-07-07 10:12:46.727000+00:00 | 2019-07-07 10:12:46.727000+00:00 | null | 48,271,491 | <p>I am starting to learn about Apache Kafka. This <a href="https://engineering.linkedin.com/kafka/intra-cluster-replication-apache-kafka" rel="noreferrer">https://engineering.linkedin.com/kafka/intra-cluster-replication-apache-kafka</a> article states that Kafka is a CA system inside the CAP-Theorem. So it focuses on consistency between replicas and also on overall availability.</p>
<p>I recently heard about an extension of the CAP-Theorem called PACELC (<a href="https://en.wikipedia.org/wiki/PACELC_theorem" rel="noreferrer">https://en.wikipedia.org/wiki/PACELC_theorem</a>).
This theorem could be visualized like this:</p>
<p><a href="https://i.stack.imgur.com/lV2pB.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lV2pB.png" alt="enter image description here"></a></p>
<p>My question is how Apache Kafka can be described in PACELC terms. I would think that Kafka focuses on consistency when a partition occurs, but what about when no partition occurs? Is the focus on low latency or strong consistency?</p>
<p>Thanks!</p> | 2018-01-15 22:21:18.950000+00:00 | 2019-07-07 10:12:46.727000+00:00 | null | apache-kafka|bigdata|cap-theorem | ['https://kafka.apache.org/documentation/#design_ha', 'https://kafka.apache.org/documentation/#producerconfigs', 'https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/', 'https://github.com/confluentinc/cp-docker-images/blob/master/examples/enterprise-kafka/docker-compose.yml', 'https://aphyr.com/posts/293-jepsen-kafka', 'http://blog.empathybox.com/post/62279088548/a-few-notes-on-kafka-and-jepsen', 'https://arxiv.org/abs/1509.05393', 'https://www.researchgate.net/publication/220476881_CAP_Twelve_years_later_How_the_Rules_have_Changed'] | 8 |
51,180,608 | <p>The <code>3</code> is the number of input channels (<code>R</code>, <code>G</code>, <code>B</code>). That <code>64</code> is the number of channels (i.e. <em>feature maps</em>) in the output of the first convolution operation. So, the first conv layer takes a color (RGB) image as input, applies <code>11x11</code> kernel with a stride 4, and outputs <code>64</code> feature maps.</p>
<p>I agree that this differs from the number of channels (<code>96</code>, 48 on each GPU) shown in the architecture diagram of the original AlexNet implementation.</p>
<p>However, PyTorch does not implement the original AlexNet architecture. Rather, it implements a variant of AlexNet described in the paper: <a href="https://arxiv.org/abs/1404.5997" rel="nofollow noreferrer"><code>One weird trick for parallelizing convolutional neural networks</code></a>.</p>
<p>Also, see <a href="http://cs231n.github.io/convolutional-networks/#conv" rel="nofollow noreferrer">cs231n - convolutional networks</a> for more details about how input, filters, stride, and padding relate to the output shape of the conv operation.</p>
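<p>A quick sanity check of the shapes (a minimal sketch; the 224x224 input size is the one used by the torchvision variant):</p>
<pre><code>import torch
import torch.nn as nn

conv1 = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)
x = torch.randn(1, 3, 224, 224)   # a batch with one RGB image
print(conv1(x).shape)             # torch.Size([1, 64, 55, 55]) -> 64 feature maps
</code></pre>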
<hr>
<p>P.S: See <a href="https://github.com/pytorch/vision/issues/185" rel="nofollow noreferrer"><code>pytorch/vision/issues/185</code></a></p> | 2018-07-04 20:50:39.477000+00:00 | 2018-07-05 22:59:06.633000+00:00 | 2018-07-05 22:59:06.633000+00:00 | null | 51,180,135 | <p>I am specifically looking at the AlexNet architecture found here:
<a href="https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py" rel="nofollow noreferrer">https://github.com/pytorch/vision/blob/master/torchvision/models/alexnet.py</a></p>
<p>I am confused as to how they are getting the input and output channels. Based on my readings of the AlexNet, I can't figure out where they are getting <em>outputchannels = 64</em> from (as the second argument to the <code>Conv2d</code> function). Even if the <em>256</em> is split across 2 GPUs, that should give <em>128</em> rather than <em>64</em>. The input channel of 3 initially represents the color channels as per my assumption. However, the other input and output channels don't make sense to me either.</p>
<p>Could anyone clarify what the input and output channels are?</p>
<pre><code>class AlexNet(nn.Module):
def __init__(self, num_classes=1000):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), #why 64?
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
</code></pre>
<p><a href="https://i.stack.imgur.com/rPOTl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rPOTl.jpg" alt="enter image description here" /></a></p> | 2018-07-04 19:56:58.173000+00:00 | 2021-06-23 00:29:17.807000+00:00 | 2021-06-23 00:29:17.807000+00:00 | image-processing|filter|deep-learning|conv-neural-network|pytorch | ['https://arxiv.org/abs/1404.5997', 'http://cs231n.github.io/convolutional-networks/#conv', 'https://github.com/pytorch/vision/issues/185'] | 3 |
31,737,751 | <p>More pointers:</p>
<ul>
<li><p><a href="https://en.wikipedia.org/wiki/Datalog" rel="nofollow">Datalog</a> has declarative semantics, but as a "Prolog without function symbols" it is not Prolog. See the excellent intro <em>"What You Always Wanted to Know About Datalog (And Never Dared to Ask)"</em> by Ceri, Gottlob and Tanca, 1989. Available via <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.210.1118&rank=1" rel="nofollow">CiteSeerX</a></p></li>
<li><p>Implementations of Prolog that use <a href="https://www.google.com/search?q=tabled+logic+programming&ie=utf-8&oe=utf-8" rel="nofollow">tabling</a> instead of depth-first search for added declarativeness (plus other nice features as I understand), like <a href="http://arxiv.org/abs/1012.5123" rel="nofollow">XSB</a>.</p></li>
</ul> | 2015-07-31 03:49:05.710000+00:00 | 2015-07-31 03:49:05.710000+00:00 | null | null | 31,674,831 | <p>I would like to formalize some knowledge and execute queries in what may referred to as fully-declarative <a href="http://www.w3.org/2005/rules/wg/wiki/Horn_Logic" rel="nofollow">Horn logic</a> (or, fully-declarative Prolog). Could anyone provide some guidelines on how to implement it? I briefly recap the fine description from the link above:</p>
<p>The formal language is that of (the core of) Prolog: a "program" is a set of rules and facts as in Prolog (including functions and variables and basically, containing only user defined predicates).</p>
<p>In contrast to Prolog, however, I am looking for an implementation that is sound and complete with respect to the standard declarative semantics of logic programs --- the least Herbrand model (i.e., the inductively defined set of ground terms). In theoretical work on logic programming this is usually the object of study, and it is well known that a sound and complete answer to queries can be attained (in the "recursively-enumerable" sense), for example, using SLD-resolution subject to the following conditions:</p>
<ul>
<li><strong>fair</strong> search for matching rules (e.g., Prolog's depth-first search is <strong>not</strong> fair);</li>
<li>unification with "<strong>occurs-check</strong>" (checking that a variable doesn't occur in a term with which it is unified).</li>
</ul>
<p>I am looking for a concise implementation that would build on existing capabilities, rather than inventing the wheel. Two of the more promising directions that I see are implementing it as a meta-interpreter of Prolog, or as part of some theorem prover. Could anyone with practical knowledge in these domains provide some guideline on how to implement it? Can it be easily implemented in <a href="http://minikanren.org/" rel="nofollow">miniKanren</a>?</p>
<hr>
<p><em>My intentions are to formalize some knowledge in a fully-declarative manner. The crucial characteristics of such a formalization is that it precisely corresponds to the mathematical notion of (monotone) induction, so that the knowledge and it's properties can be easily reasoned about with inductive arguments.</em></p> | 2015-07-28 11:21:51.700000+00:00 | 2015-08-09 23:53:24.510000+00:00 | 2015-08-09 23:53:24.510000+00:00 | prolog|theorem-proving|logic-programming|formal-verification|minikanren | ['https://en.wikipedia.org/wiki/Datalog', 'http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.210.1118&rank=1', 'https://www.google.com/search?q=tabled+logic+programming&ie=utf-8&oe=utf-8', 'http://arxiv.org/abs/1012.5123'] | 4 |
44,892,785 | <p>There is a lot of research going on in this area. Roughly two lines of work deal with this:</p>
<ul>
<li>Efficient network architectures</li>
<li>Some post-processing for an already trained model </li>
</ul>
<p>You can have a look at <a href="https://github.com/hszhao/ICNet" rel="nofollow noreferrer">ICNet</a> Figure 1, where some architectures for fast inference for semantic segmentation are shown. Many of these models can be tweaked to do classification or other image processing tasks in real time. These models all have a low number of parameters compared to other networks and can be evaluated on embedded platforms.</p>
<p>For "post-hoc" optimizations you can look at TensorFlows Graph Transform Tool that do many of such for you: <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md" rel="nofollow noreferrer">Graph Transform Tool</a>
or maybe look into the paper by Song Han <a href="https://arxiv.org/abs/1510.00149" rel="nofollow noreferrer">Deep Compression</a> where many of these ideas are described. Song Han has also given many great lectures in this area and you can e.g. find one in the <a href="http://cs231n.github.io/" rel="nofollow noreferrer">CS231n</a> class at Stanford.</p>
<p>The speed of the inference phase depends on a lot of things other than the number of parameters or neurons. So I don't think there is a rule of thumb for the maximum number of neurons.</p> | 2017-07-03 19:32:18.763000+00:00 | 2017-07-03 19:32:18.763000+00:00 | null | null | 44,874,560 | <p>I have only a little background knowledge about Neural Networks (NN).
However, up to now I have learned that training the network is the actual expensive part. Processing data with an already-trained network is much cheaper/faster, ultimately.</p>
<p>Still, I'm not entirely sure what the expensive parts are within the processing chain. As far as I know, it's mostly matrix multiplication for standard layers. Not the cheapest operation, but definitely doable. On top of that, there are other layers, like max-pooling or activation functions at each node, which might have higher complexity. Are those the bottlenecks?</p>
<p>Now, I wonder if "simple" Hardware provided by Smartphones or even cheap stand-alone Hardware like Raspberry PIs are capable of utilizing a (convolutional-) Neuronal Networks to do, for example, Image Processing, like Object Detection. Of course, I mean doing the calculations on the device itself, not by transmitting the data to a second, powerful machine or even a cloud, which does the calculations, before sending back the results to the smartphone. </p>
<p>If so, what is the maximum number of neurons such a network should have (e.g. how many layers and how many neurons per layer), roughly estimated? And lastly, are there any good projects or libraries that use NNs on such reduced, simpler hardware?</p> | 2017-07-02 19:40:53.813000+00:00 | 2017-08-25 10:10:19.580000+00:00 | null | mobile|raspberry-pi|neural-network|deep-learning|time-complexity | ['https://github.com/hszhao/ICNet', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md', 'https://arxiv.org/abs/1510.00149', 'http://cs231n.github.io/'] | 4
65,627,361 | <h3>Poincaré Embeddings</h3>
<blockquote>
<p>However, while complex symbolic datasets often exhibit a latent hierarchical structure, state-of-the-art methods typically learn embeddings in Euclidean vector spaces, which do not account for this property. For this purpose, we introduce a new approach for learning hierarchical representations of symbolic data by embedding them into hyperbolic space – or more precisely into an n-dimensional Poincaré ball.</p>
</blockquote>
<p>Poincaré embeddings allow you to create hierarchical embeddings in a non-euclidean space. The vectors on the outside of the Poincaré ball are lower in hierarchy compared to the ones in the center.</p>
<p><a href="https://i.stack.imgur.com/uZQMe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uZQMe.png" alt="enter image description here" /></a></p>
<p>The Poincaré ball model maps the Euclidean metric tensor to a Riemannian metric tensor on the open d-dimensional unit ball.</p>
<p><a href="https://i.stack.imgur.com/S2YhR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S2YhR.png" alt="enter image description here" /></a></p>
<p>Distances between 2 vectors in this non-euclidean space are calculated as</p>
<p><a href="https://i.stack.imgur.com/ZxoJ4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZxoJ4.png" alt="enter image description here" /></a></p>
<p>The <a href="https://arxiv.org/pdf/1705.08039.pdf" rel="nofollow noreferrer">research paper for Poincaré embeddings</a> is wonderfully written, and you will find solid implementations of them in popular libraries as well. Needless to say, they are underrated.</p>
<p>Two implementations that you can use are found in -</p>
<ul>
<li><code>tensorflow_addons.PoincareNormalize</code></li>
<li><code>gensim.models.poincare</code></li>
</ul>
<h3>Tensorflow Addons implementation</h3>
<p>According to the documentation, for a 1D tensor, <code>tfa.layers.PoincareNormalize</code> computes the following output along axis=0.</p>
<pre><code>output = (x * (1 - epsilon)) / ||x||   if ||x|| > 1 - epsilon
output = x                             otherwise
</code></pre>
<p>For a higher dimensional tensor, it independently normalizes each 1-D slice along the <strong>dimension axis</strong>.</p>
<p>This transformation can simply be applied to an n-dimensional embedding. Let's create a 5-dimensional embedding for each element of the input sequence. The dimension axis=-1 in this case is the one mapped from a Euclidean space to the non-Euclidean space.</p>
<pre><code>import numpy as np
from tensorflow.keras import layers, Model, utils
import tensorflow_addons as tfa

# Toy data: integer token ids (as expected by the Embedding layer) and targets
X = np.random.randint(0, 500, (100, 10))
y = np.random.random((100,))
inp = layers.Input((10,))
x = layers.Embedding(500, 5)(inp)
x = tfa.layers.PoincareNormalize(axis=-1)(x) #<-------
x = layers.Flatten()(x)
out = layers.Dense(1)(x)
model = Model(inp, out)
model.compile(optimizer='adam', loss='binary_crossentropy')
utils.plot_model(model, show_shapes=True, show_layer_names=False)
model.fit(X, y, epochs=3)
</code></pre>
<pre><code>Epoch 1/3
4/4 [==============================] - 0s 2ms/step - loss: 7.9455
Epoch 2/3
4/4 [==============================] - 0s 2ms/step - loss: 7.5753
Epoch 3/3
4/4 [==============================] - 0s 2ms/step - loss: 7.2429
<tensorflow.python.keras.callbacks.History at 0x7fbb14595310>
</code></pre>
<h3>Gensim implementation</h3>
<p>Another implementation of Poincaré embeddings can be found in Gensim. It's very similar to what you would use when working with Word2Vec from Gensim.</p>
<p>The process would be -</p>
<ol>
<li>Train Gensim embeddings (word2vec or poincare)</li>
<li>Initialize Embedding layer in Keras with embeddings</li>
<li>Set the embedding layer as non-trainable</li>
<li>Train model for the downstream task</li>
</ol>
<pre><code>from gensim.models.poincare import PoincareModel
relations = [('kangaroo', 'marsupial'), ('kangaroo', 'mammal'), ('gib', 'cat'), ('cow', 'mammal'), ('cat','pet')]
model = PoincareModel(relations, size = 2, negative = 2) #Change size for higher dims
model.train(epochs=10)
print('kangaroo vs marsupial:',model.kv.similarity('kangaroo','marsupial'))
print('gib vs mammal:', model.kv.similarity('gib','mammal'))
print('Embedding for Cat: ', model.kv['cat'])
</code></pre>
<pre><code>kangaroo vs marsupial: 0.9481239343527523
gib vs mammal: 0.5325816385250299
Embedding for Cat: [0.22193988 0.0776986 ]
</code></pre>
<p>More details on training and saving Poincare embeddings can be found <a href="https://radimrehurek.com/gensim/models/poincare.html" rel="nofollow noreferrer">here</a>.</p> | 2021-01-08 10:31:24.310000+00:00 | 2021-01-09 14:38:19.843000+00:00 | 2021-01-09 14:38:19.843000+00:00 | null | 64,047,435 | <p>I am trying to implement Poincaré embeddings as discussed in a paper by Facebook (<a href="https://arxiv.org/pdf/1705.08039.pdf" rel="nofollow noreferrer">Link</a>) for my hierarchical data. You may find a more accessible explanation of Poincaré embeddings <a href="https://medium.com/@srijithpoduval/explaining-poincar%C3%A9-embeddings-d7cb9e4a2bbf" rel="nofollow noreferrer">here</a>.</p>
<p>Based on the paper I have found some implementations for Tensorflow <a href="https://github.com/kousun12/tf_hyperbolic" rel="nofollow noreferrer">here</a> and <a href="https://github.com/qiangsiwei/poincare_embedding" rel="nofollow noreferrer">here</a> as well as <a href="https://www.tensorflow.org/addons/api_docs/python/tfa/layers/PoincareNormalize" rel="nofollow noreferrer">tfa.layers.PoincareNormalize</a> in Tensorflow Addons. The latter even had a link to the paper mentioned above, which makes me believe it could be a good starting point for me. However, I had no luck implementing tfa.layers.PoincareNormalize so far and also could not find any documentation except some generic information on the API page that I linked.</p>
<p>Does anyone know how this layer is supposed to be used to provide the embedding in hyperbolic space discussed in the paper? My starting point is an implementation with a standard Embedding layer, as presented below (it is actually an entity embedding of a categorical variable).</p>
<pre><code>input = Input(shape=(1, ))
model = Embedding(input_dim=my_input_dim,
output_dim=embed_dim, name="my_feature")(input)
model = Reshape(target_shape=(embed_dim, ))(model)
model = Dense(1)(model)
model = Activation('sigmoid')(model)
</code></pre>
<p>Simply replacing the Embedding layer by tfa.layers.PoincareNormalize does not work due to different inputs. I assume that it could be placed somwhere after the embedding layer so that for the back propagation step the "values" are projected into hyperbolic space on each iteration, but had no luck with that so far either.</p> | 2020-09-24 13:24:04.897000+00:00 | 2021-01-09 14:38:19.843000+00:00 | null | tensorflow|embedding|hyperbolic-function | ['https://i.stack.imgur.com/uZQMe.png', 'https://i.stack.imgur.com/S2YhR.png', 'https://i.stack.imgur.com/ZxoJ4.png', 'https://arxiv.org/pdf/1705.08039.pdf', 'https://radimrehurek.com/gensim/models/poincare.html'] | 5 |
50,776,135 | <p>Try reducing step size to increase acceptance rate. Optimal acceptance rate for HMC is around .651 (<a href="https://arxiv.org/abs/1001.4460" rel="nofollow noreferrer">https://arxiv.org/abs/1001.4460</a>). Not sure why you'd see negative values. Maybe floating point error near zero? Can you post some of the logs of your run?</p> | 2018-06-09 16:08:10.573000+00:00 | 2018-06-09 16:08:10.573000+00:00 | null | null | 50,762,204 | <p>I'm trying to fit a simple Dirichlet-Multinomial model in tensorflow probability. The concentration parameters are <code>gamma</code> and I have put a Gamma(1,1) prior distribution on them. This is the model, where S is the number of categories and N is the number of samples:</p>
<pre><code>def dirichlet_model(S, N):
gamma = ed.Gamma(tf.ones(S)*1.0, tf.ones(S)*1.0, name='gamma')
y = ed.DirichletMultinomial(total_count=500., concentration=gamma, sample_shape=(N), name='y')
return y
log_joint = ed.make_log_joint_fn(dirichlet_model)
</code></pre>
<p>However, when I try to sample from this using HMC, the acceptance rate is zero, and the initial draw for <code>gamma</code> contains negative values. Am I doing something wrong? Shouldn't negative proposals for the concentration parameters be rejected automatically? Below is my sampling code:</p>
<pre><code>def target_log_prob_fn(gamma):
"""Unnormalized target density as a function of states."""
return log_joint(
S=S, N=N,
gamma=gamma,
y=y_new)
num_results = 5000
num_burnin_steps = 3000
states, kernel_results = tfp.mcmc.sample_chain(
num_results=num_results,
num_burnin_steps=num_burnin_steps,
current_state=[
tf.ones([5], name='init_gamma')*5,
],
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=target_log_prob_fn,
step_size=0.4,
num_leapfrog_steps=3))
gamma = states
with tf.Session() as sess:
[
gamma_,
is_accepted_,
] = sess.run([
gamma,
kernel_results.is_accepted,
])
num_accepted = np.sum(is_accepted_)
print('Acceptance rate: {}'.format(num_accepted / num_results))
</code></pre> | 2018-06-08 13:53:21.360000+00:00 | 2018-06-09 16:08:10.573000+00:00 | 2018-06-08 20:30:28.573000+00:00 | python-3.x|tensorflow-probability | ['https://arxiv.org/abs/1001.4460'] | 1 |
66,614,947 | <p>I advise you to read: Beltagy, Iz, Matthew E. Peters, and Arman Cohan. "Longformer: The long-document transformer." arXiv preprint arXiv:2004.05150 (2020).</p>
<p>The main contribution of this paper is a model that can take long document token sequences as input and process long-range context across the whole document at a computational cost that scales linearly with sequence length.</p>
<p>Here, instead of BERT's full self-attention over an input sequence of at most <code>N=512</code> tokens, the sliding-window attention mechanism attends to a window of <code>n=512</code> tokens around each position, which is what makes much longer inputs tractable.</p>
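<p>A minimal usage sketch via the Hugging Face <code>transformers</code> library (an assumption on my side - any Longformer checkpoint would do; the released base model accepts up to 4096 tokens):</p>
<pre><code>from transformers import LongformerTokenizer, LongformerModel

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

long_text = " ".join(["resume"] * 3000)   # far beyond BERT's 512-token limit
inputs = tokenizer(long_text, return_tensors="pt",
                   truncation=True, max_length=4096)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)    # (1, sequence_length, hidden_size)
</code></pre>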
<hr />
<p> Longformer: The Long-Document Transformer</p>
<p>GitHub: <a href="https://github.com/allenai/longformer" rel="nofollow noreferrer">https://github.com/allenai/longformer</a></p>
<p>Paper: <a href="https://arxiv.org/abs/2004.05150" rel="nofollow noreferrer">https://arxiv.org/abs/2004.05150</a></p> | 2021-03-13 15:02:55.287000+00:00 | 2021-03-13 15:02:55.287000+00:00 | null | null | 65,107,718 | <p>I am trying to build a search application for resumes which are in .pdf format. For a given search query like "who is proficient in Java and worked in an MNC", the output should be the CV which is most similar. My plan is to read pdf text and find the cosine similarity between the text and the query.</p>
<p>However, BERT has a problem with long documents. It supports a sequence length of only 512 but all my CVs have more than 1000 words. I am really stuck here. Methods like truncating the documents don't suit the purpose.</p>
<p>Is there any other model that can do this?</p>
<p>I could not find the right approach with models like Longformer and XLNet for this task.</p>
<pre><code>import numpy as np
import tensorflow_hub as hub

def cosine(u, v):
    # cosine similarity between two vectors (higher = more similar)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model = hub.load(module_url)
print ("module %s loaded" % module_url)
corpus = list(documents.values())  # documents: {doc_name: doc_text} dict, defined elsewhere
sentence_embeddings = model(corpus)
query = "who is profiecient in C++ and has Rust"
query_vec = model([query.lower()])[0]
doc_names = list(documents.keys())
results = []
for i,sent in enumerate(corpus):
sim = cosine(query_vec, model([sent])[0])
results.append((i,sim))
#print("Document = ", doc_name[i], "; similarity = ", sim)
print(results)
results= sorted(results, key=lambda x: x[1], reverse=True)
print(results)
for idx, distance in results[:5]:
print(doc_names[idx].strip(), "(Cosine Score: %.4f)" % (distance))
</code></pre> | 2020-12-02 12:02:44.687000+00:00 | 2021-03-13 15:02:55.287000+00:00 | 2020-12-02 14:17:50.563000+00:00 | python|bert-language-model|language-model | ['https://github.com/allenai/longformer', 'https://arxiv.org/abs/2004.05150'] | 2 |
71,712,939 | <p>Both your code and the code in <a href="https://stackoverflow.com/a/71699969/4609915">the answer of @TessellatingHacker</a> lose <a href="/questions/tagged/logical-purity" class="post-tag" title="show questions tagged 'logical-purity'" rel="tag">logical-purity</a> when the arguments of <code>minset_one/3</code> are not sufficiently instantiated:</p>
<pre>
?- D1 = [X,Y,Z], D2 = [U,V], minset_one(D1,D2,T).
D1 = [<b>1</b>,Y,Z], D2 = [<b>1</b>,V], T = D2
; <b>false</b>. % no more solutions!
</pre>
<p>This is clearly incomplete. There <em>are</em> other solutions. We lost <a href="/questions/tagged/logical-purity" class="post-tag" title="show questions tagged 'logical-purity'" rel="tag">logical-purity</a>.</p>
<p>So, what can we do about this?
Basically, we have two options:</p>
<ol>
<li>check <code>D1</code>, <code>D2</code> and <code>T</code> upfront and throw an <code>instantiation_error</code> when the instantiation is not sufficient.</li>
<li>use building blocks that are better suited for code that preserves <a href="/questions/tagged/logical-purity" class="post-tag" title="show questions tagged 'logical-purity'" rel="tag">logical-purity</a>.</li>
</ol>
<hr />
<p>In this answer I want to show how to realise option number two.</p>
<p>The code is based on <a href="https://stackoverflow.com/a/27358600/4609915"><code>if_/3</code></a> which is the core of <a href="https://arxiv.org/pdf/1607.01590.pdf" rel="nofollow noreferrer"><code>library(reif)</code></a>.
In short, we reify the truth values of relations and use Prolog indexing on these values.</p>
<p>Using SWI-Prolog 8.4.2:</p>
<pre>
?- use_module(<a href="http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/reif.pl" rel="nofollow noreferrer">library(reif)</a>).
</pre>
<p>First, <code>shorter_than_t(Xs,Ys,T)</code>
reifies <em>"list <code>Xs</code> is shorter than <code>Ys</code>"</em> into <code>T</code>:</p>
<pre>
shorter_than_t([],Ys,T) :-
   aux_nil_shorter_than_t(Ys,T).
shorter_than_t([_|Xs],Ys,T) :-
aux_cons_shorter_than_t(Ys,Xs,T).
aux_nil_shorter_than_t([],false).
aux_nil_shorter_than_t([_|_],true).
aux_cons_shorter_than_t([],_,false).
aux_cons_shorter_than_t([_|Ys],Xs,T) :-
shorter_than_t(Xs,Ys,T).
</pre>
<p>Based on <code>shorter_than_t/3</code> we define <code>minset_one/3</code>:</p>
<pre>
minset_one(D1,D2,T) :-
if_(shorter_than_t(D1,D2),
if_(memberd_t(1,D1), D1=T, (memberd_t(1,D2,true),D2=T)),
if_(memberd_t(1,D2), D2=T, (memberd_t(1,D1,true),D1=T))).
</pre>
<p>Now let's run above query again:</p>
<pre>
?- D1 = [X,Y,Z], D2 = [U,V], minset_one(D1,D2,T).
D1 = [X,Y,Z], D2 = [<b>1</b>,V], T = D2
; D1 = [X,Y,Z], D2 = [U,<b>1</b>], T = D2, dif(U,1)
; D1 = [<b>1</b>,Y,Z], D2 = [U,V], T = D1, dif(U,1), dif(V,1)
; D1 = [X,<b>1</b>,Z], D2 = [U,V], T = D1, dif(U,1), dif(V,1), dif(X,1)
; D1 = [X,Y,<b>1</b>], D2 = [U,V], T = D1, dif(U,1), dif(V,1), dif(X,1), dif(Y,1)
; false.
</pre>
<p>At last, <code>minset_one/3</code> has become complete!</p> | 2022-04-01 21:31:07.040000+00:00 | 2022-04-02 12:06:37.337000+00:00 | 2022-04-02 12:06:37.337000+00:00 | null | 71,696,650 | <p>In SWI-Prolog I want to establish the list L from two lists <strong>L1</strong> and <strong>L2</strong> with the smallest count of elements under the condition, that <code>1 ∈ L1</code> and <code>1 ∈ L2</code>.
If <code>1 ∉ L1</code> and <code>1 ∈ L2</code>, then <code>L = L2</code>. If <code>1 ∈ L1</code> and <code>1 ∉ L2</code>, then <code>L = L1</code>. If <code>1 ∉ L1</code> and <code>1 ∉ L2</code>, then the predicate returns false.</p>
<p>I implemented this in Prolog with the following clauses:</p>
<pre><code>minset_one(D1, D2, T) :- ((member(1, D1), not(member(1, D2))) -> T=D1).
minset_one(D1, D2, T) :- ((not(member(1, D1)), member(1, D2)) -> T=D2).
minset_one(D1, D2, T) :- (member(1, D1), member(1, D2), length(D1,L1), length(D2,L2), L1 >= L2) -> T=D2.
minset_one(D1, D2, T) :- (member(1, D1), member(1, D2), length(D1,L1), length(D2,L2), L2 > L1) -> T=D1.
</code></pre>
<p>My problem with this predicate is that <code>member/2</code> is called very often. Is there a way to reduce the complexity of the predicate such that the goals</p>
<ul>
<li><code>member(1, D1)</code></li>
<li><code>member(1, D2)</code></li>
<li><code>length(D1, L1)</code></li>
<li><code>length(D2, L2)</code></li>
</ul>
<p>are called only one time?</p> | 2022-03-31 17:10:15.293000+00:00 | 2022-04-02 12:06:37.337000+00:00 | 2022-03-31 19:41:44.993000+00:00 | prolog|min|member | ['https://stackoverflow.com/a/71699969/4609915', '/questions/tagged/logical-purity', '/questions/tagged/logical-purity', '/questions/tagged/logical-purity', 'https://stackoverflow.com/a/27358600/4609915', 'https://arxiv.org/pdf/1607.01590.pdf', 'http://www.complang.tuwien.ac.at/ulrich/Prolog-inedit/swi/reif.pl'] | 7 |
53,673,327 | <p>You can do either 1) Object Detection or 2) Semantic Segmentation. I would suggest segmentation, because boundary extraction is crucial for your application.</p>
<p>I'm assuming you have the pages of the documents as images.</p>
<p>The following are the steps involved in a typical segmentation project. </p>
<h3>Dataset</h3>
<ol>
<li>Collect the images of the pages required to solve your problem and do
preprocessing steps such as image resizing to bring all images in
your dataset to a common shape and to reduce the number of computations performed. Be sure to maintain variability in your samples.</li>
<li>Now you have to annotate the regions of the images that you are interested in and mark them with a name, i.e. assign a class (as in classification) to certain regions of the image. You can use the following tools for this.</li>
</ol>
<p><a href="https://github.com/wkentaro/labelme" rel="nofollow noreferrer">Labelme</a> -- (my recommendation) </p>
<p><a href="http://www.robots.ox.ac.uk/~vgg/software/via/" rel="nofollow noreferrer">Vgg Annotation tool</a> -- (highly portable tool written in html but has less features than labelme)</p>
<h3>Model</h3>
<p>You can use the U-Net model for your task (<a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net paper</a>). It is very easy to implement and performs robustly on most real-world tasks such as yours. A minimal sketch follows.</p>
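<p>To give an idea of the shape of the architecture, here is a minimal two-level U-Net sketch in Keras (input shape and class count are placeholders; the original paper uses a deeper four-level encoder/decoder):</p>
<pre><code>from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), num_classes=2):
    inputs = layers.Input(input_shape)
    # Encoder
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D(2)(c2)
    # Bottleneck
    b = conv_block(p2, 128)
    # Decoder with skip connections (the defining feature of U-Net)
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
</code></pre>
<p>The skip connections are what let the network recover sharp region boundaries, which is exactly what you need for extracting block-scheme outlines.</p>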
<p>We have done something similar at work. This is the <a href="https://labs.imaginea.com/post/measuring-feet-using-deep-learning/" rel="nofollow noreferrer">blog post</a>. We have explained in detail the steps involved in the pipeline, from the data collection stage to the results.</p>
<h3>Literature on Document Layout Analysis.</h3>
<ol>
<li><a href="https://arxiv.org/pdf/1804.10371.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1804.10371.pdf</a> -- They have used U-Net with ResNet-50 as encoder. They have achieved very good results compared to previous approaches</li>
<li><a href="https://github.com/leonlulu/DeepLayout--" rel="nofollow noreferrer">https://github.com/leonlulu/DeepLayout--</a> This is a Python implementation of page layout analysis tool using a Deep Lab v2 model which does semantic segmentation.</li>
</ol>
<h3>Conclusion</h3>
<p>The approach presented here might seem tedious and time consuming but it is robust to variability in the documents when you are testing. Comment below if you have any questions.</p> | 2018-12-07 16:22:20.250000+00:00 | 2018-12-07 16:22:20.250000+00:00 | null | null | 53,601,859 | <p>I have image of text document. It includes text and block-schemes. The main problem is to detect block-schemes. I think there are two approaches to solve this task: 1) detect geometric primitive that make up the scheme; 2) detect the whole scheme.</p>
<p>How can I solve this task? Please give me some approaches.</p>
<p><strong><em>UPDATE 1</em></strong>
I am trying to detect where in the document the block-scheme is placed. An example is shown in the picture below. I am not trying to detect the text inside the block-scheme.</p>
<p><strong><em>UPDATE 2</em></strong> The main problem is that I need to find block-schemes in many varieties, even partial block-schemes.</p>
<p><a href="https://i.stack.imgur.com/4jXAm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4jXAm.png" alt="Example image"></a></p> | 2018-12-03 21:05:53.310000+00:00 | 2018-12-10 10:11:44.377000+00:00 | 2018-12-10 10:11:44.377000+00:00 | opencv|computer-vision|object-detection | ['https://github.com/wkentaro/labelme', 'http://www.robots.ox.ac.uk/~vgg/software/via/', 'https://arxiv.org/pdf/1505.04597.pdf', 'https://labs.imaginea.com/post/measuring-feet-using-deep-learning/.', 'https://arxiv.org/pdf/1804.10371.pdf', 'https://github.com/leonlulu/DeepLayout--'] | 6 |
48,579,925 | <p>You can try using <a href="http://man7.org/linux/man-pages/man2/mincore.2.html" rel="nofollow noreferrer"><code>mincore(2)</code></a></p>
<p>This is not thread-safe, unfortunately: another thread might allocate that region after you check the region's status but before you execute <code>mmap</code>.</p>
<p>If you need to reserve a memory area, just create anonymous private mapping with <code>PROT_NONE</code>. Later you can put different mappings on top of it using <code>MAP_FIXED</code>.</p>
<p>EDIT: Looks like <code>mincore</code> behavior is going to <a href="https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=574823bfab82d9d8fa47f422778043fbb4b4f50e" rel="nofollow noreferrer">change</a> in Linux 5.0 because it can cause an information leak (<a href="http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489" rel="nofollow noreferrer">CVE-2019-5489</a>):</p>
<blockquote>
<p>So let's try to avoid that information leak by simply changing the
semantics to be that mincore() counts actual mapped pages, not pages
that might be cheaply mapped if they were faulted (note the "might be"
part of the old semantics: being in the cache doesn't actually
guarantee that you can access them without IO anyway, since things
like network filesystems may have to revalidate the cache before use).</p>
</blockquote>
<p>Vulnerability description can be found <a href="https://arxiv.org/pdf/1901.01161.pdf" rel="nofollow noreferrer">here</a>.</p> | 2018-02-02 09:49:00.530000+00:00 | 2019-01-10 09:20:49.587000+00:00 | 2019-01-10 09:20:49.587000+00:00 | null | 48,578,642 | <p>I really want to reserve a specific set of memory locations with the <code>MAP_FIXED</code> option to <code>mmap</code>. However, by default <code>mmap</code> will <code>munmap</code> anything already at those addresses, which would be disastrous.</p>
<p>How can I tell <code>mmap</code> to "reserve the memory at this address, but fail if it is already in use"?</p> | 2018-02-02 08:30:24.897000+00:00 | 2019-01-10 09:20:49.587000+00:00 | 2018-02-02 16:59:00.857000+00:00 | posix|mmap | ['http://man7.org/linux/man-pages/man2/mincore.2.html', 'https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=574823bfab82d9d8fa47f422778043fbb4b4f50e', 'http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5489', 'https://arxiv.org/pdf/1901.01161.pdf'] | 4 |
14,640,998 | <p>As far as I can see, this is a variation of the Mastermind board game: <a href="http://en.m.wikipedia.org/wiki/Mastermind_(board_game)" rel="nofollow">http://en.m.wikipedia.org/wiki/Mastermind_(board_game)</a></p>
<p>Also, you can find more details about the problem in this paper:</p>
<p><a href="http://arxiv.org/abs/cs.CC/0512049" rel="nofollow">http://arxiv.org/abs/cs.CC/0512049</a></p> | 2013-02-01 06:34:34.063000+00:00 | 2013-02-01 06:34:34.063000+00:00 | null | null | 14,640,139 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/1185634/how-to-solve-the-mastermind-guessing-game">How to solve the “Mastermind” guessing game?</a> </p>
</blockquote>
<p>I have to choose <code>k</code> items out of <code>n</code> choices, and my selection needs to be in the correct order (i.e. permutation, not combination). After I make a choice, I receive a hint that tells me how many of my selections were correct, and how many were in the correct order.</p>
<p>For example, if I'm trying to choose <code>k=4</code> out of <code>n=6</code> items, and the correct ordered set is <code>5, 3, 1, 2</code>, then an exchange may go as follows:</p>
<pre><code>0,1,2,3
(3, 0) # 3 correct, 0 in the correct position
0,1,2,5
(3, 0)
0,1,5,3
(3, 0)
0,5,2,3
(3,0)
5,1,2,3
(4,1)
5,3,1,2
(4,4)
-> correct order, the game is over
</code></pre>
<p>The problem is I'm only given a limited number of tries to get the order right, so if <code>n=6, k=4</code>, then I only get <code>t=6</code> tries, if <code>n=10,k=5</code> then <code>t=5</code>, and if <code>n=35,k=6</code> then <code>t=18</code>.</p>
<p><strong>Where do I start to write an algorithm that solves this?</strong> It almost seems like a constraint solving problem. The hard part seems to be that I only know something for sure if I only change 1 thing at once, but the upper bound on that is way more than the number of tries I get.</p> | 2013-02-01 05:10:55.447000+00:00 | 2013-02-01 06:57:19.423000+00:00 | 2017-05-23 12:27:01.490000+00:00 | algorithm|combinatorics | ['http://en.m.wikipedia.org/wiki/Mastermind_(board_game)', 'http://arxiv.org/abs/cs.CC/0512049'] | 2 |
63,912,094 | <p>A regular expression, together with the <strong>findAll()</strong> method, can be used to find all the interesting links in the given HTML content.</p>
<p>BeautifulSoup offers an easy way to read a table from HTML.</p>
<p>The goal of reading PDF links from a table inside given HTML content can be achieved by using a regex along with BeautifulSoup.</p>
<p><strong>Working example using a regex along with BeautifulSoup</strong></p>
<pre><code># File name: find-pdf-links.py
import re
from bs4 import BeautifulSoup
htmlContent = """
<h3>Three-way classification</h3>
<table>
...
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">
...
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">
....
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">
......
<td><a href="https://www.nyu.edu/projects/bowman/spinn.pdf">
...
<td><a href="http://arxiv.org/pdf/1511.06361v3.pdf">
</table>
"""
# Read webpage
webPage = BeautifulSoup(htmlContent, 'html.parser')
# Read table form the webpage
tableOfLinks = webPage.find("table")
print("PDF links:")
# Note: this pattern only matches http:// links whose href ends in "pdf"
for link in tableOfLinks.findAll('a', attrs={'href': re.compile("^http://.*pdf$")}):
print(link.get('href'))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>> python find-pdf-links.py
PDF links:
http://nlp.stanford.edu/pubs/snli_paper.pdf
http://nlp.stanford.edu/pubs/snli_paper.pdf
http://nlp.stanford.edu/pubs/snli_paper.pdf
http://arxiv.org/pdf/1511.06361v3.pdf
</code></pre>
<p><strong>More information:</strong></p>
<p><a href="https://www.w3schools.com/python/python_regex.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/python_regex.asp</a></p>
<p><a href="https://www.geeksforgeeks.org/python-check-url-string/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/python-check-url-string/</a></p> | 2020-09-16 02:01:38.063000+00:00 | 2020-09-16 02:52:43.050000+00:00 | 2020-09-16 02:52:43.050000+00:00 | null | 63,911,855 | <h1>Problem</h1>
<p>I am now moving into a natural language processing project. Before I get my hands dirty, I plan to read other people's work on this dataset, which is organized as a <a href="https://nlp.stanford.edu/projects/snli/" rel="nofollow noreferrer">leaderboard</a> (see the "Three-way classification" section).</p>
<p>However, in order to download these papers, I need to manually click on each URL (there are about 50 of them), which is time-consuming. Therefore, I am trying to extract these URLs from the HTML, which looks like the following:</p>
<pre><code><h3>Three-way classification</h3>
<blockquote>
<table class="newstuff">
<tr class="header">
<th>Publication</th>
<th>&nbsp;Model</th>
<th>Parameters</th>
<th>&nbsp;Train (% acc)</th>
<th>&nbsp;Test (% acc)</th>
</tr>
<tr class="section">
<th colspan="5" style="background-color:transparent; color:#646464;">Feature-based models</th>
</tr>
<tr>
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">Bowman et al. '15</a></td>
<td>Unlexicalized features</td>
<td></td>
<td style="text-align: right">49.4</td>
<td style="text-align: right">50.4</td>
</tr>
<tr>
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">Bowman et al. '15</a></td>
<td>+ Unigram and bigram features</td>
<td></td>
<td style="text-align: right">99.7</td>
<td style="text-align: right"><em>78.2</em></td>
</tr>
<tr class="section">
<th colspan="5" style="background-color:transparent; color:#646464;">Sentence vector-based models</th>
</tr>
<tr>
<td><a href="http://nlp.stanford.edu/pubs/snli_paper.pdf">Bowman et al. '15</a></td>
<td>100D LSTM encoders</td>
<td style="text-align: right">220k</td>
<td style="text-align: right">84.8</td>
<td style="text-align: right">77.6</td>
</tr>
<tr>
<td><a href="https://www.nyu.edu/projects/bowman/spinn.pdf">Bowman et al. '16</a></td>
<td>300D LSTM encoders</td>
<td style="text-align: right">3.0m</td>
<td style="text-align: right">83.9</td>
<td style="text-align: right">80.6</td>
</tr>
<tr>
<td><a href="http://arxiv.org/pdf/1511.06361v3.pdf">Vendrov et al. '15</a></td>
<td>1024D GRU encoders w/ unsupervised 'skip-thoughts' pre-training</td>
<td style="text-align: right">15m</td>
<td style="text-align: right">98.8</td>
<td style="text-align: right">81.4</td>
</tr>
...
</code></pre>
<p>I know I could use <code>requests</code> and <code>bs4.BeautifulSoup</code> to download and parse this page. But I could not figure out a way to extract the URLs, because there is no easy way to pinpoint each individual row (there are other URLs outside of the table, so I could not say that any URL extracted from the HTML is what I want).</p>
<p>Could anyone help me? Thank you in advance.</p>
<h1>Update</h1>
<p>The main difficulty is to extract URLs <strong>only</strong> from the leaderboard, which is tagged as</p>
<pre><code><h3>Three-way classification</h3>
<blockquote>
<table class="newstuff">
...
</table>
</blockquote>
</code></pre>
<p>Before and after this leaderboard there is a lot of content that is irrelevant to my purpose, including many other URLs.</p> | 2020-09-16 01:21:10.527000+00:00 | 2020-09-16 02:52:43.050000+00:00 | 2020-09-16 02:10:18.870000+00:00 | html|beautifulsoup | ['https://www.w3schools.com/python/python_regex.asp', 'https://www.geeksforgeeks.org/python-check-url-string/'] | 2
37,046,639 | <p>First, for those without IEEE Digital Library access, here is a link to the Arxiv PDF of this research: <a href="http://arxiv.org/pdf/1412.0880v1.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/1412.0880v1.pdf</a></p>
<p>The Wi-Fi Direct specification allows for a legacy device (i.e. a device without Wi-Fi Direct) to connect to a Wi-Fi Direct GO using its Wi-Fi interface. The authors of this research have used this to allow a GO to be a client in another group. So the GO has clients on the P2P interface and also connects to another GO using its legacy Wi-Fi interface.</p>
<p>To implement this, you will need to do the following:</p>
<ol>
<li>Allow GOs to acquire their Wi-Fi Direct group passphrase/key.</li>
<li>Distribute the passphrase securely to other GOs.</li>
<li>Allow GOs to using a legacy Wi-Fi connection to connect to other GOs.</li>
</ol>
<p>As the paper describes, there will be IP address conflicts, so messaging between all pairs of devices will not be possible at the IP layer, e.g. the client of one GO will not be able to communicate with the client of another. To overcome this, you will need to implement a messaging layer at the application layer.</p>
<p>First, from the documentation, we know that we can start a P2P Group that can accept legacy connections using the <code>WifiP2pManager.createGroup (WifiP2pManager.Channel c, WifiP2pManager.ActionListener listener)</code> method and its details can be fetched using <code>WifiP2pManager.requestGroupInfo (WifiP2pManager.Channel c, WifiP2pManager.GroupInfoListener listener)</code>. The <code>onGroupInfoAvailable(WifiP2pGroup group)</code> method of <code>GroupInfoListener</code> allows us to access a <code>WifiP2pGroup</code> object that represents the group. <code>WifiP2pGroup.getPassphrase()</code> will retrieve the group's passphrase. Now that we have the passphrase, we can distribute this to other GOs that wish to connect to this group's GO by Wi-Fi.</p>
<pre><code>wifiP2pManager.requestGroupInfo(channel,
new WifiP2pManager.GroupInfoListener() {
@Override
public void onGroupInfoAvailable(WifiP2pGroup group) {
if(group != null){
// clients require these
                String ssid = group.getNetworkName();
                String passphrase = group.getPassphrase();
}
}
});
</code></pre>
<p>Having distributed the passsphrase, a GO can connect to another GO programatically, as described in the answer to <a href="https://stackoverflow.com/questions/8818290/how-to-connect-to-a-specific-wifi-network-in-android-programmatically">How to connect to a specific wifi network in Android programmatically?</a>.</p> | 2016-05-05 09:08:23.427000+00:00 | 2016-05-05 09:08:23.427000+00:00 | 2017-05-23 12:16:12.840000+00:00 | null | 36,917,758 | <p>I have read an article named "Content-centric Routing in wifi direct multi-group networks",in this article,it told us the method to implement inter-group communication ,but I couldn't implement it with program in android device ,if some one who has interest in this problem ,please contact me!!!!!</p> | 2016-04-28 14:31:59.437000+00:00 | 2016-05-05 09:08:23.427000+00:00 | null | wifi-direct | ['http://arxiv.org/pdf/1412.0880v1.pdf', 'https://stackoverflow.com/questions/8818290/how-to-connect-to-a-specific-wifi-network-in-android-programmatically'] | 2 |
60,406,041 | <p>A possible approach is to use the EAST (Efficient and Accurate Scene Text) deep learning text detector based on Zhou et al.’s 2017 paper, <a href="https://arxiv.org/abs/1704.03155" rel="noreferrer"><em>EAST: An Efficient and Accurate Scene Text Detector</em></a>. The model was originally trained for detecting text in natural scene images, but it may be possible to apply it to diagram images. EAST is quite robust and is capable of detecting blurred or reflective text. Here is a modified version of <a href="https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/" rel="noreferrer">Adrian Rosebrock's implementation of EAST</a>. Instead of applying the text detector directly on the image, we can try to remove as many non-text objects from the image as possible before performing text detection. The idea is to remove horizontal lines, vertical lines, and non-text contours (curves, diagonals, circular shapes) before applying detection. Here are the results with some of your images:</p>
<p>Input <code>-></code> Non-text contours to remove in green</p>
<p><img src="https://i.stack.imgur.com/iVZzH.png" width="325">
<img src="https://i.stack.imgur.com/Gdxrz.png" width="325"></p>
<p>Result</p>
<p><img src="https://i.stack.imgur.com/nl7Y2.png" width="325"></p>
<p>Other images</p>
<p><img src="https://i.stack.imgur.com/OyupG.png" width="325">
<img src="https://i.stack.imgur.com/z4fMr.png" width="325"></p>
<p><img src="https://i.stack.imgur.com/BJe6v.png" width="325"></p>
<p><img src="https://i.stack.imgur.com/Gl6kR.png" width="325">
<img src="https://i.stack.imgur.com/B79Xa.png" width="325"></p>
<p><img src="https://i.stack.imgur.com/fYzq6.png" width="325"></p>
<p>The pretrained <code>frozen_east_text_detection.pb</code> model necessary to perform text detection can be <a href="https://www.kaggle.com/yelmurat/frozen-east-text-detection" rel="noreferrer">found here</a>. Although the model catches most of the text, the results are not 100% accurate and have occasional false positives, probably due to how it was trained on natural scene images. To obtain more accurate results you would probably have to train your own custom model. But if you want a decent out-of-the-box solution then this should work for you. Check out Adrian's <a href="https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/" rel="noreferrer">OpenCV Text Detection (EAST text detector)</a> blog post for a more comprehensive explanation of the EAST text detector.</p>
<p>Code</p>
<pre><code>from imutils.object_detection import non_max_suppression
import numpy as np
import cv2
def EAST_text_detector(original, image, confidence=0.25):
# Set the new width and height and determine the changed ratio
(h, W) = image.shape[:2]
(newW, newH) = (640, 640)
rW = W / float(newW)
rH = h / float(newH)
# Resize the image and grab the new image dimensions
image = cv2.resize(image, (newW, newH))
(h, W) = image.shape[:2]
# Define the two output layer names for the EAST detector model that
# we are interested -- the first is the output probabilities and the
# second can be used to derive the bounding box coordinates of text
layerNames = [
"feature_fusion/Conv_7/Sigmoid",
"feature_fusion/concat_3"]
net = cv2.dnn.readNet('frozen_east_text_detection.pb')
# Construct a blob from the image and then perform a forward pass of
# the model to obtain the two output layer sets
blob = cv2.dnn.blobFromImage(image, 1.0, (W, h), (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
(scores, geometry) = net.forward(layerNames)
# Grab the number of rows and columns from the scores volume, then
# initialize our set of bounding box rectangles and corresponding
# confidence scores
(numRows, numCols) = scores.shape[2:4]
rects = []
confidences = []
# Loop over the number of rows
for y in range(0, numRows):
# Extract the scores (probabilities), followed by the geometrical
# data used to derive potential bounding box coordinates that
# surround text
scoresData = scores[0, 0, y]
xData0 = geometry[0, 0, y]
xData1 = geometry[0, 1, y]
xData2 = geometry[0, 2, y]
xData3 = geometry[0, 3, y]
anglesData = geometry[0, 4, y]
# Loop over the number of columns
for x in range(0, numCols):
# If our score does not have sufficient probability, ignore it
if scoresData[x] < confidence:
continue
# Compute the offset factor as our resulting feature maps will
# be 4x smaller than the input image
(offsetX, offsetY) = (x * 4.0, y * 4.0)
# Extract the rotation angle for the prediction and then
# compute the sin and cosine
angle = anglesData[x]
cos = np.cos(angle)
sin = np.sin(angle)
# Use the geometry volume to derive the width and height of
# the bounding box
h = xData0[x] + xData2[x]
w = xData1[x] + xData3[x]
# Compute both the starting and ending (x, y)-coordinates for
# the text prediction bounding box
endX = int(offsetX + (cos * xData1[x]) + (sin * xData2[x]))
endY = int(offsetY - (sin * xData1[x]) + (cos * xData2[x]))
startX = int(endX - w)
startY = int(endY - h)
# Add the bounding box coordinates and probability score to
# our respective lists
rects.append((startX, startY, endX, endY))
confidences.append(scoresData[x])
# Apply non-maxima suppression to suppress weak, overlapping bounding
# boxes
boxes = non_max_suppression(np.array(rects), probs=confidences)
# Loop over the bounding boxes
for (startX, startY, endX, endY) in boxes:
# Scale the bounding box coordinates based on the respective
# ratios
startX = int(startX * rW)
startY = int(startY * rH)
endX = int(endX * rW)
endY = int(endY * rH)
# Draw the bounding box on the image
cv2.rectangle(original, (startX, startY), (endX, endY), (36, 255, 12), 2)
return original
# Convert to grayscale and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
clean = thresh.copy()
# Remove horizontal lines
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
cv2.drawContours(clean, [c], -1, 0, 3)
# Remove vertical lines
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,30))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
cv2.drawContours(clean, [c], -1, 0, 3)
# Remove non-text contours (curves, diagonals, circular shapes)
cnts = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
if area > 1500:
cv2.drawContours(clean, [c], -1, 0, -1)
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
x,y,w,h = cv2.boundingRect(c)
if len(approx) == 4:
cv2.rectangle(clean, (x, y), (x + w, y + h), 0, -1)
# Bitwise-and with original image to remove contours
filtered = cv2.bitwise_and(image, image, mask=clean)
filtered[clean==0] = (255,255,255)
# Perform EAST text detection
result = EAST_text_detector(image, filtered)
cv2.imshow('filtered', filtered)
cv2.imshow('result', result)
cv2.waitKey()
</code></pre> | 2020-02-26 02:48:18.147000+00:00 | 2020-02-26 02:48:18.147000+00:00 | null | null | 60,275,455 | <p>I have multiple images diagram, all of which contains labels as alphanumeric characters instead of just the text label itself. I want my YOLO model to identify all the numbers & alphanumeric characters present in it.</p>
<p>How can I train my YOLO model to do the same? The dataset can be found here: <a href="https://drive.google.com/open?id=1iEkGcreFaBIJqUdAADDXJbUrSj99bvoi" rel="noreferrer">https://drive.google.com/open?id=1iEkGcreFaBIJqUdAADDXJbUrSj99bvoi</a></p>
<p>For example, see the bounding boxes. I want YOLO to detect wherever text is present. However, it is currently not necessary to identify the text inside the boxes.</p>
<p><a href="https://i.stack.imgur.com/kckxb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kckxb.png" alt="enter image description here"></a></p>
<p>Also, the same needs to be done for these types of images:
<a href="https://i.stack.imgur.com/KKsO2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KKsO2.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/8Q2TO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8Q2TO.png" alt="enter image description here"></a></p>
<p>The images can be downloaded <a href="https://drive.google.com/open?id=1iEkGcreFaBIJqUdAADDXJbUrSj99bvoi" rel="noreferrer">here</a></p>
<p>This is what I have tried using opencv but it does not work for all the images in the dataset.</p>
<pre><code>import cv2
import numpy as np
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r"C:\Users\HPO2KOR\AppData\Local\Tesseract-OCR\tesseract.exe"
image = cv2.imread(r'C:\Users\HPO2KOR\Desktop\Work\venv\Patent\PARTICULATE DETECTOR\PD4.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
clean = thresh.copy()
horizontal_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15,1))
detect_horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, horizontal_kernel, iterations=2)
cnts = cv2.findContours(detect_horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
cv2.drawContours(clean, [c], -1, 0, 3)
vertical_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1,30))
detect_vertical = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, vertical_kernel, iterations=2)
cnts = cv2.findContours(detect_vertical, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
cv2.drawContours(clean, [c], -1, 0, 3)
cnts = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
area = cv2.contourArea(c)
if area < 100:
cv2.drawContours(clean, [c], -1, 0, 3)
elif area > 1000:
cv2.drawContours(clean, [c], -1, 0, -1)
peri = cv2.arcLength(c, True)
approx = cv2.approxPolyDP(c, 0.02 * peri, True)
x,y,w,h = cv2.boundingRect(c)
if len(approx) == 4:
cv2.rectangle(clean, (x, y), (x + w, y + h), 0, -1)
open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
opening = cv2.morphologyEx(clean, cv2.MORPH_OPEN, open_kernel, iterations=2)
close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,2))
close = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, close_kernel, iterations=4)
cnts = cv2.findContours(close, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
x,y,w,h = cv2.boundingRect(c)
area = cv2.contourArea(c)
if area > 500:
ROI = image[y:y+h, x:x+w]
ROI = cv2.GaussianBlur(ROI, (3,3), 0)
data = pytesseract.image_to_string(ROI, lang='eng',config='--psm 6')
if data.isalnum():
cv2.rectangle(image, (x, y), (x + w, y + h), (36,255,12), 2)
print(data)
cv2.imwrite('image.png', image)
cv2.imwrite('clean.png', clean)
cv2.imwrite('close.png', close)
cv2.imwrite('opening.png', opening)
cv2.waitKey()
</code></pre>
<p>Is there any model, any OpenCV technique, or some pre-trained model that can do this for me?
I just need the bounding boxes around all the alphanumeric characters present in the images. After that I need to identify what's present in them. However, the second part is not important currently.</p> | 2020-02-18 07:03:38.037000+00:00 | 2020-02-28 13:35:11.887000+00:00 | 2020-02-26 02:47:49.447000+00:00 | python|opencv|machine-learning|deep-learning|yolo | ['https://arxiv.org/abs/1704.03155', 'https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/', 'https://www.kaggle.com/yelmurat/frozen-east-text-detection', 'https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/'] | 4
61,222,465 | <p>Batch size affects regularization. Training on a single example at a time is quite noisy, which makes it harder to overfit. Training on batches smoothes everything out, which makes it easier to overfit. Translating back to regularization: </p>
<ul>
<li>Smaller batches add regularization.</li>
<li>Larger batches reduce regularization.</li>
</ul>
<p>I am also curious about your learning rate. Every call to <code>loss.backward()</code> will accumulate the gradient. If you have set your learning rate to expect a single example at a time, and not reduced it to account for batch accumulation, then one of two things will happen.</p>
<ol>
<li><p>The learning rate will be too high for the now-accumulated gradient, training will diverge, and both training and validation errors will explode.</p></li>
<li><p>The learning rate won't be too high, and nothing will diverge. The model will just train more quickly and effectively. If the model is too large for the data being fit, then training error will go to 0 but validation error will explode due to overfitting.</p></li>
</ol>
<hr>
<p><strong>Update</strong></p>
<p>Here is a bit more detail regarding the gradient accumulation.</p>
<p>Every call to <code>loss.backward()</code> will accumulate gradient, until you reset it with <code>optimizer.zero_grad()</code>. It will be acted on when you call <code>optimizer.step()</code>, based on whatever it has accumulated.</p>
<p>The way your code is written, you call <code>loss.backward()</code> for every pass through the inner loop, then you call <code>optimizer.step()</code> in the outer loop before resetting. So the gradient has been accumulated, that is summed, over all examples in the batch and not just one example at a time.</p>
<p>Under most assumptions, that will make the batch-accumulated gradient larger than the gradient for a single example. If the gradients are all aligned, for B batches, it will be larger by B times. If the gradients are <a href="https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables" rel="nofollow noreferrer">i.i.d.</a> then it will be more like <code>sqrt(B)</code> times larger.</p>
<p>If you do not account for this, then you have effectively increased your learning rate by that factor. Some of that will be mitigated by the smoothing effect of larger batches, which can then tolerate a higher learning rate. Larger batches reduce regularization, larger learning rates add it back. But that will not be a perfect match to compensate, so you will still want to adjust accordingly.</p>
<p>In general, whenever you change your batch size you will also want to re-tune your learning rate to compensate.</p>
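<p>To make that concrete, here is a sketch of the two usual fixes for a loop shaped like the one in the question (the names <code>loader</code> and <code>accumulation_steps</code> are placeholders, not from the original code):</p>
<pre><code># Option A: step once per batch, resetting the gradient each time.
for sequence, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(sequence), labels)
    loss.backward()
    optimizer.step()

# Option B: deliberately accumulate over several batches, but scale the
# loss so the summed gradient matches a single step's magnitude.
optimizer.zero_grad()
for i, (sequence, labels) in enumerate(loader):
    loss = criterion(model(sequence), labels) / accumulation_steps
    loss.backward()  # gradients keep summing until zero_grad()
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
</code></pre>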
<hr>
<p><em>Leslie N. Smith</em> has written some excellent papers on a methodical approach to hyperparameter tuning. A great place to start is <a href="https://arxiv.org/abs/1803.09820" rel="nofollow noreferrer">A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay</a>. He recommends you start by reading the diagrams, which are very well done.</p> | 2020-04-15 06:18:32.230000+00:00 | 2020-04-15 21:57:56.943000+00:00 | 2020-04-15 21:57:56.943000+00:00 | null | 61,122,561 | <p>I'm training a sequence to sequence (seq2seq) model and I have different values to train on for the <code>input_sequence_length</code>. </p>
<p>For values <code>10</code> and <code>15</code> I get acceptable results, but when I try to train with <code>20</code> I get <em>memory errors</em>, so I switched to training in batches. The model then <em>over-fits</em> and the validation loss explodes, and even with the accumulated gradient I get the same behavior, so I'm looking for hints and leads toward more accurate ways to do the update.</p>
<hr>
<p>Here is my training function (only with batch section) :</p>
<pre class="lang-py prettyprint-override"><code> if batch_size is not None:
k=len(list(np.arange(0,(X_train_tensor_1.size()[0]//batch_size-1), batch_size )))
for epoch in range(num_epochs):
optimizer.zero_grad()
epoch_loss=0
for i in list(np.arange(0,(X_train_tensor_1.size()[0]//batch_size-1), batch_size )): # by using equidistant batch till the last one it becomes much faster than using the X.size()[0] directly
sequence = X_train_tensor[i:i+batch_size,:,:].reshape(-1, sequence_length, input_size).to(device)
labels = y_train_tensor[i:i+batch_size,:,:].reshape(-1, sequence_length, output_size).to(device)
# Forward pass
outputs = model(sequence)
loss = criterion(outputs, labels)
epoch_loss+=loss.item()
# Backward and optimize
loss.backward()
optimizer.step()
epoch_loss=epoch_loss/k
model.eval()
validation_loss,_= evaluate(model,X_test_hard_tensor_1,y_test_hard_tensor_1)
model.train()
training_loss_log.append(epoch_loss)
print ('Epoch [{}/{}], Train MSELoss: {}, Validation : {} {}'.format(epoch+1, num_epochs,epoch_loss,validation_loss))
</code></pre>
<p><strong>EDIT:</strong> here are the parameters that I'm training with:</p>
<pre class="lang-py prettyprint-override"><code>batch_size = 1024
num_epochs = 25000
learning_rate = 10e-04
optimizer=torch.optim.Adam(model.parameters(), lr=learning_rate)
criterion = nn.MSELoss(reduction='mean')
</code></pre> | 2020-04-09 13:43:09.737000+00:00 | 2020-04-15 21:57:56.943000+00:00 | 2020-04-15 12:39:50.480000+00:00 | python|machine-learning|neural-network|pytorch|training-data | ['https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables', 'https://arxiv.org/abs/1803.09820'] | 2 |
54,971,720 | <p>I think you might want to look at the following: <a href="https://arxiv.org/pdf/1610.02757.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1610.02757.pdf</a></p>
<p>Quoting the authors: "Notice that XGBoost does not work with the exact hessian but with its diagonal approximation."</p> | 2019-03-03 17:33:30.333000+00:00 | 2019-03-03 17:33:30.333000+00:00 | null | null | 52,595,782 | <p>I have a binary classification task and use the xgboost package to solve it. Basically, I just use boosted trees to do so. But I am being evaluated on the Brier score, so I thought I would optimize the Brier loss function (defined as the Brier score applied on top of logistic classification), which led me to define the gradient and the hessian of the Brier loss like so:</p>
<pre><code>def brier(preds, dtrain):
labels = dtrain.get_label()
preds = 1.0 / (1.0 + np.exp(-preds))
grad = 2*(preds-labels)*preds*(1-preds)
hess = 2*(2*(labels+1)*preds-labels-3*preds*preds)*preds*(1-preds)
return grad, hess
def evalerror(preds, dtrain):
preds = 1.0 / (1.0 + np.exp(-preds))
labels = dtrain.get_label()
errors = (labels - preds)**2
return 'brier-error', float(np.sum(errors)) / len(labels)
param = {'eta':0.01,
'max_depth': 6, # the maximum depth of each tree
#'objective': 'binary:logistic',
'booster' : 'gbtree',
'eval_metric':['rmse', 'auc']}
bst = xgb.train(param,dtrain, num_boost_round=999,early_stopping_rounds=10,obj=brier, feval=evalerror,evals=[(dtrain,'train'),(dtest,'test')])
</code></pre>
<p>The only problem is that by doing so, I get negative values for my prediction on my test set, which suggests that the output of the xgboost model is not the logistic probability as expected.
Does anyone know what I am missing here or if there is a better way to optimize the brier score? </p>
<p>Any help would be really appreciated!! </p>
<p>Thanks,</p> | 2018-10-01 17:10:16.100000+00:00 | 2020-03-30 06:19:30.337000+00:00 | null | python|machine-learning|xgboost | ['https://arxiv.org/pdf/1610.02757.pdf'] | 1 |
19,514,025 | <p>Have you considered <a href="http://arxiv.org/abs/1112.2903" rel="nofollow">Correlation Clustering</a>?<br>
If you read section 2.1 in that paper carefully, you'll see a probabilistic interpretation of the recovered number of clusters.</p>
<p>The only modification you need for your <code>M</code> matrix is to set a threshold deciding what distance is considered "same" and what distance is too large and should be considered as "not-same".</p>
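<p>A minimal sketch of that thresholding step (assuming <code>M</code> is your <code>n*n</code> distance matrix and <code>tau</code> is a threshold you pick by inspection or validation):</p>
<pre><code>import numpy as np

tau = 0.5  # hypothetical "same vs. not-same" cutoff

# +1 for pairs close enough to count as "same", -1 otherwise;
# this signed affinity matrix is the input correlation clustering expects.
A = np.where(M <= tau, 1, -1)
np.fill_diagonal(A, 1)  # every object is "same" as itself
</code></pre>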
<p>Section 7.2 in the aforementioned paper deals with a clustering of a full matrix where the recovering of the underlying number of clusters is an important part of the task at hand.</p> | 2013-10-22 09:30:31.517000+00:00 | 2013-10-22 09:30:31.517000+00:00 | null | null | 18,909,096 | <p>I have a set of objects <code>{obj1, obj2, obj3, ..., objn}</code>. I have calculated the pairwise distances of all possible pairs. The distances are stored in a <code>n*n</code> matrix <code>M</code>, with <code>Mij</code> being the distance between <code>obji</code> and <code>objj</code>. Then it is natural to see <code>M</code> is a symmetric matrix.</p>
<p>Now I wish to perform unsupervised clustering on these objects. After some searching, I found that <a href="http://en.wikipedia.org/wiki/Spectral_clustering" rel="noreferrer">Spectral Clustering</a> may be a good candidate, since it deals with such pairwise-distance cases.</p>
<p>However, after carefully reading its description, I find it unsuitable in my case, as <strong>it requires the number of clusters as the input</strong>. Before clustering, I don't know the number of clusters. It has to be figured out by the algorithm while performing the clustering, like DBSCAN.</p>
<p><strong>Considering these, please suggest some clustering methods that fit my case</strong>, where</p>
<ol>
<li>The pairwise distances are all available.</li>
<li>The number of clusters is unknown.</li>
</ol> | 2013-09-20 04:59:25.170000+00:00 | 2018-01-17 19:16:25.213000+00:00 | null | algorithm|machine-learning|cluster-analysis | ['http://arxiv.org/abs/1112.2903'] | 1 |
71,429,486 | <p>I don't know if you need this anymore. I came here searching for this myself. So here it goes.</p>
<p>I am not so good with torch so please bear with me.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras

client_model = keras.models.Sequential([keras.layers........., ......])
with tf.GradientTape(persistent=True) as client_tape:
client_pred = client_model(batch_flat)
# Get your gradients from server
grad_from_server = youGradientGetterfunction()
client_gradients = client_tape.gradient(client_pred,
client_model.trainable_weights,
output_gradients=grad_from_server)
</code></pre>
<p>Now you have gradients for every layers in your client side, you can use an optimiser like:</p>
<pre><code>client_opt = tf.keras.optimizers.SGD(learning_rate=0.1)
client_opt.apply_gradients(zip(client_gradients,
client_model.trainable_weights))
</code></pre>
<p>This will apply the calculated gradients to all the layers in the client side model. Or, you may choose to apply the gradients manually using</p>
<blockquote>
<p>w = w - g*lr</p>
</blockquote>
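<p>A minimal sketch of that manual rule in TensorFlow (illustrative only; <code>lr</code> is whatever learning rate you pick):</p>
<pre><code>lr = 0.1  # hypothetical learning rate
for w, g in zip(client_model.trainable_weights, client_gradients):
    w.assign_sub(lr * g)  # in-place w = w - lr * g
</code></pre>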
<p>So, there you go, try this out.</p>
<p>Please check out <a href="https://www.tensorflow.org/api_docs/python/tf/GradientTape" rel="nofollow noreferrer">Tensorflow GradientTape</a> → Methods → gradients</p>
<p>I was trying to implement split learning (<a href="https://arxiv.org/abs/1812.00564" rel="nofollow noreferrer">Split learning for health, Vepakomma et al.</a>), which needed this.</p> | 2022-03-10 19:05:23.443000+00:00 | 2022-03-10 19:05:23.443000+00:00 | null | null | 63,832,644 | <p>I'm trying to implement a split learning model, where my TF model on a client takes in the data and produces an intermediate output. This intermediate output will be sent to a server running the Pytorch model that will take it in as input and minimize the loss. Then, my server will send back the client gradients to the TF model for the TF model to update its weights.</p>
<p>How do I get my TF model to update its weights with the gradients sent back from the server?</p>
<pre><code># pytorch client
client_output.backward(client_grad)
optimizer.step()
</code></pre>
<p>With PyTorch, I can just do a <code>client_pred.backward(client_grad)</code> and <code>client_optimizer.step()</code>.</p>
<p>How do I achieve the same with a Tensorflow client? I've tried GradientTape with <code>tape.gradient(client_grad, model.trainable_weights)</code> but it just gives me None. I think it's because there's no computation in the tape context and client_grad is just a Tensor holding the gradients and is not connected to the model's layers?</p>
<p>Is there some way I can do this with tf's <code>apply_gradients</code>() or <code>compute_gradients</code>()?</p>
<p>I only have the gradients for the client's last layer (sent by server). I'm trying to compute all the gradients for the client and update the weights.</p>
<p>Thank you.</p>
<pre><code>
class TensorflowModel(tf.keras.Model):
def __init__(self, D_in, H, D_out):
super(TensorflowModel, self).__init__()
self.d1 = Dense(H, activation='relu', input_shape=(D_in,))
self.d2 = Dense(D_out)
def call(self, x):
x = self.d1(x)
return self.d2(x)
tensorflowModel = TensorflowModel(D_in, H, D_out)
tensorflowOptimizer = tf.optimizers.Adam(lr=1e-4)
serverModel = torch.nn.Sequential(
torch.nn.Linear(10, 50),
torch.nn.ReLU(),
torch.nn.Linear(50, 10)
)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(serverModel.parameters(), lr=1e-4)
for t in range(N):
# let x be the minibatch
# let y be the labels of the minibatch
client_pred = tensorflowModel(x)
client_output = torch.from_numpy(client_pred.numpy())
client_output.requires_grad = True
y_pred = serverModel(client_output)
loss = loss_fn(y_pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()  # update server weights
# now retrieve the client grad for the last layer
client_grad = client_output.grad.detach().clone().numpy()
client_grad = tf.convert_to_tensor(client_grad)  # change to a tf tensor
# now compute all client gradients and update client weights
# HOW DO I DO THIS?
</code></pre>
<p>How should I update the client weights? If the client was a pytorch model I could just do client_pred.backward(client_grad) and client_optimizer.step(). I'm not sure how to use the gradient tape to calculate gradients, since client_grad was computed on the server and was a pytorch tensor that's converted to a tf tensor.</p> | 2020-09-10 15:11:35.620000+00:00 | 2022-03-10 19:05:23.443000+00:00 | 2020-09-11 03:10:32.830000+00:00 | python|tensorflow|machine-learning|pytorch|tensorflow2.0 | ['https://www.tensorflow.org/api_docs/python/tf/GradientTape', 'https://arxiv.org/abs/1812.00564'] | 2 |
50,550,236 | <p>I think the answer you are looking for is described in the 2015 paper <a href="https://arxiv.org/pdf/1508.02297.pdf" rel="noreferrer">Measuring Word Significance
using
Distributed Representations of Words</a> by Adriaan Schakel and Benjamin Wilson. The key points:</p>
<blockquote>
<p>When a word appears
in different contexts, its vector gets moved in
different directions during updates. The final vector
then represents some sort of weighted average
over the various contexts. Averaging over vectors
that point in different directions typically results in
a vector that gets shorter with increasing number
of different contexts in which the word appears.
For words to be used in many different contexts,
they must carry little meaning. Prime examples of
such insignificant words are high-frequency stop
words, which are indeed represented by short vectors
despite their high term frequencies ...</p>
</blockquote>
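<p>As a quick empirical check of this effect, one can print vector norms directly; a sketch, assuming a gensim model loaded as in the question (the example words are arbitrary):</p>
<pre><code>import numpy as np

for word in ['the', 'king', 'serendipity']:
    if word in w2v:
        print(word, np.linalg.norm(w2v[word]))
# Frequent, low-content words tend to come out shorter than
# rarer, consistently-used words, per the paper's Figure 3.
</code></pre>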
<hr>
<blockquote>
<p>For given term frequency,
the vector length is seen to take values only in a
narrow interval. That interval initially shifts upwards
with increasing frequency. Around a frequency
of about 30, that trend reverses and the interval
shifts downwards.</p>
<p>...</p>
<p>Both forces determining the length of a word
vector are seen at work here. Small-frequency
words tend to be used consistently, so that the
more frequently such words appear, the longer
their vectors. This tendency is reflected by the upwards
trend in Fig. 3 at low frequencies. High-frequency
words, on the other hand, tend to be
used in many different contexts, the more so, the
more frequently they occur. The averaging over
an increasing number of different contexts shortens
the vectors representing such words. This tendency
is clearly reflected by the downwards trend
in Fig. 3 at high frequencies, culminating in punctuation
marks and stop words with short vectors at
the very end.</p>
<p>...</p>
<p><a href="https://i.stack.imgur.com/NI9je.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NI9je.png" alt="Graph showing the trend described in the previous excerpt"></a></p>
<p>Figure 3: Word vector length <em>v</em> versus term frequency
<em>tf</em> of all words in the hep-th vocabulary.
Note the logarithmic scale used on the frequency
axis. The dark symbols denote bin means with the
<i>k</i>th bin containing the frequencies in the interval
[2<sup><i>k−1</i></sup>, 2<sup><i>k</i></sup> − 1] with <em>k</em> = 1, 2, 3, . . .. These means
are included as a guide to the eye. The horizontal
line indicates the length <em>v</em> = 1.37 of the mean
vector</p>
</blockquote>
<hr>
<blockquote>
<h3>4 Discussion</h3>
<p>Most applications of distributed representations of
words obtained through word2vec so far centered
around semantics. A host of experiments have
demonstrated the extent to which the direction of
word vectors captures semantics. In this brief report,
it was pointed out that not only the direction,
but also the length of word vectors carries important
information. Specifically, it was shown that
word vector length furnishes, in combination with
term frequency, a useful measure of word significance. </p>
</blockquote> | 2018-05-27 08:10:16.250000+00:00 | 2018-05-27 11:38:54.553000+00:00 | 2018-05-27 11:38:54.553000+00:00 | null | 36,034,454 | <p>I am using Word2vec through <a href="https://radimrehurek.com/gensim/" rel="noreferrer"><em>gensim</em></a> with Google's pretrained vectors trained on Google News. I have noticed that the word vectors I can access by doing direct index lookups on the <code>Word2Vec</code> object are not unit vectors:</p>
<pre><code>>>> import numpy
>>> from gensim.models import Word2Vec
>>> w2v = Word2Vec.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
>>> king_vector = w2v['king']
>>> numpy.linalg.norm(king_vector)
2.9022589
</code></pre>
<p>However, in the <a href="https://github.com/piskvorky/gensim/blob/0.12.4/gensim/models/word2vec.py#L1153-L1213" rel="noreferrer"><code>most_similar</code></a> method, these non-unit vectors are not used; instead, normalised versions are used from the undocumented <code>.syn0norm</code> property, which contains only unit vectors:</p>
<pre><code>>>> w2v.init_sims()
>>> unit_king_vector = w2v.syn0norm[w2v.vocab['king'].index]
>>> numpy.linalg.norm(unit_king_vector)
0.99999994
</code></pre>
<p>The larger vector is just a scaled up version of the unit vector:</p>
<pre><code>>>> king_vector - numpy.linalg.norm(king_vector) * unit_king_vector
array([ 0.00000000e+00, -1.86264515e-09, 0.00000000e+00,
0.00000000e+00, -1.86264515e-09, 0.00000000e+00,
-7.45058060e-09, 0.00000000e+00, 3.72529030e-09,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
... (some lines omitted) ...
-1.86264515e-09, -3.72529030e-09, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00,
0.00000000e+00, 0.00000000e+00, 0.00000000e+00], dtype=float32)
</code></pre>
<p>Given that word similarity comparisons in Word2Vec are done by <a href="https://en.wikipedia.org/wiki/Cosine_similarity" rel="noreferrer">cosine similarity</a>, it's not obvious to me what the lengths of the non-normalised vectors mean - although I assume they mean <em>something</em>, since gensim exposes them to me rather than only exposing the unit vectors in <code>.syn0norm</code>.</p>
<p>How are the lengths of these non-normalised Word2vec vectors generated, and what is their meaning? For what calculations does it make sense to use the normalised vectors, and when should I use the non-normalised ones?</p> | 2016-03-16 11:31:27.083000+00:00 | 2018-05-27 11:38:54.553000+00:00 | null | python|nlp|gensim|word2vec | ['https://arxiv.org/pdf/1508.02297.pdf', 'https://i.stack.imgur.com/NI9je.png'] | 2 |
71,574,306 | <p>As mentioned in other answers, BERT was not meant to produce sentence-level embeddings. Now, let's look at how we can leverage the power of BERT for computing context-sensitive sentence-level embeddings.</p>
<p>BERT does carry context at the word level; here is an example:</p>
<p>This is a wooden <strong>stick</strong>.
<strong>Stick</strong> to your work.</p>
<p>The two sentences above carry the word 'stick'; BERT does a good job of computing embeddings of 'stick' per sentence (or say, per context).</p>
<p>Now, let's move to another example:</p>
<p>--What is your age?</p>
<p>--How old are you?</p>
<p>The two sentences above are contextually very similar, so we need a model that can accept a sentence, text chunk, or paragraph and produce the right embeddings collectively. Here is how it can be achieved.</p>
<p>Method 1:</p>
<p>Use pre-trained sentence_transformers; here is a <a href="https://huggingface.co/models?pipeline_tag=sentence-similarity&sort=downloads" rel="nofollow noreferrer">link</a> to the Hugging Face hub.</p>
<pre><code>from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
model = SentenceTransformer(r"sentence-transformers/paraphrase-MiniLM-L6-v2")
embd_a = model.encode("What is your age?")
embd_b = model.encode("How old are you?")
sim_score = cos_sim(embd_a, embd_b)
print(sim_score)
output: tensor([[0.8648]])
</code></pre>
<p>Now, there may be a question of how we can train our own sentence_transformer, specific to a domain. Here we go:</p>
<ol>
<li>Supervised approach:</li>
</ol>
<p>A common challenge for data scientists or ML engineers is to get correctly annotated data; it is usually hard to get in good volume. But say you have it; here is how we can train our own sentence_transformer (don't worry, there is an unsupervised approach too).</p>
<pre><code>from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer('distilbert-base-nli-mean-tokens')
train_examples = [InputExample(texts=['My first sentence', 'My second sentence'], label=0.8),
                  InputExample(texts=['Another pair', 'Unrelated sentence'], label=0.3)]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)
# Tune the model
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
</code></pre>
<p>More details <a href="https://www.sbert.net/docs/training/overview.html" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Tip: If you have a set of sentences that are similar to each other, say a CSV where columns A and B contain similar sentences (i.e. each row holds a pair of similar sentences), just load the CSV, assign random values between 0.85 and 0.95 as the similarity score, and proceed, as in the sketch below.</p>
</blockquote>
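<p>A minimal sketch of that tip (the file name and the column names <code>sent_a</code>/<code>sent_b</code> are hypothetical):</p>
<pre><code>import random
import pandas as pd
from sentence_transformers import InputExample

df = pd.read_csv('similar_pairs.csv')  # hypothetical CSV of similar pairs
train_examples = [
    InputExample(texts=[row.sent_a, row.sent_b],
                 label=random.uniform(0.85, 0.95))  # pseudo similarity score
    for row in df.itertuples()
]
</code></pre>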
<ol start="2">
<li>Unsupervised approach</li>
</ol>
<p>Say you don't have a huge set of annotated data, but you want to train a domain-specific sentence_transformer; here is how we do it. Even for unsupervised training, data is required, i.e. a list of sentences/paragraphs, but it need not be annotated. If you don't have any data at all, there is still a workaround (please visit the last part of the answer).</p>
<p>Multiple approaches are available for unsupervised training; I will list two of the most prominent ones. To see a list of all available approaches, please visit <a href="https://www.sbert.net/examples/unsupervised_learning/README.html" rel="nofollow noreferrer">here</a>.</p>
<p><strong>TSDAE</strong> <a href="https://arxiv.org/pdf/2104.06979.pdf" rel="nofollow noreferrer">link</a> to research paper.</p>
<pre><code>from sentence_transformers import SentenceTransformer, LoggingHandler
from sentence_transformers import models, util, datasets, evaluation, losses
from torch.utils.data import DataLoader
# Define your sentence transformer model using CLS pooling
model_name = 'bert-base-uncased'
word_embedding_model = models.Transformer(model_name)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), 'cls')
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
"Model will automatically add the noise",
"And re-construct it",
"You should provide at least 1k sentences"]
# Create the special denoising dataset that adds noise on-the-fly
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
# DataLoader to batch your data
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)
# Use the denoising auto-encoder loss
train_loss = losses.DenoisingAutoEncoderLoss(model, decoder_name_or_path=model_name, tie_encoder_decoder=True)
# Call the fit method
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=1,
weight_decay=0,
scheduler='constantlr',
optimizer_params={'lr': 3e-5},
show_progress_bar=True
)
model.save('output/tsdae-model')
</code></pre>
<p><strong>SimCSE</strong> <a href="https://arxiv.org/pdf/2104.08821.pdf" rel="nofollow noreferrer">link</a> to research paper</p>
<pre><code>from sentence_transformers import SentenceTransformer, InputExample
from sentence_transformers import models, losses
from torch.utils.data import DataLoader
# Define your sentence transformer model using CLS pooling
model_name = 'distilroberta-base'
word_embedding_model = models.Transformer(model_name, max_seq_length=32)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
# Define a list with sentences (1k - 100k sentences)
train_sentences = ["Your set of sentences",
"Model will automatically add the noise",
"And re-construct it",
"You should provide at least 1k sentences"]
# Convert train sentences to sentence pairs
train_data = [InputExample(texts=[s, s]) for s in train_sentences]
# DataLoader to batch your data
train_dataloader = DataLoader(train_data, batch_size=128, shuffle=True)
# Use the contrastive multiple-negatives ranking loss
train_loss = losses.MultipleNegativesRankingLoss(model)
# Call the fit method
model.fit(
train_objectives=[(train_dataloader, train_loss)],
epochs=1,
show_progress_bar=True
)
model.save('output/simcse-model')
</code></pre>
<blockquote>
<p>Tip: If you observe carefully, the major difference is in the loss function used. To see a list of all the loss functions applicable to such training scenarios, visit <a href="https://www.sbert.net/docs/package_reference/losses.html" rel="nofollow noreferrer">here</a>. Also, across all the experiments I did, I found that TSDAE is more useful when you want decent precision and good recall, whereas SimCSE can be used when you want very high precision and low recall.</p>
</blockquote>
<p>Now, if you don't have sufficient data to fine-tune the model, but you can find a BERT model trained on your domain, you can leverage it directly by adding pooling and dense layers. Please do some research on what 'pooling' is, to better understand what you are doing.</p>
<pre><code>from sentence_transformers import SentenceTransformer, models
from torch import nn
word_embedding_model = models.Transformer('bert-base-uncased', max_seq_length=256)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
dense_model = models.Dense(in_features=pooling_model.get_sentence_embedding_dimension(), out_features=256, activation_function=nn.Tanh())
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, dense_model])
</code></pre>
<blockquote>
<p>Tip: With the above approach, if you start getting extremely high cosine scores, it is an alarm to do negative testing. Sometimes simply adding pooling layers may not help; you must take a few examples and check similarity scores for inputs that are not similar (it is possible that even for dissimilar sentences this shows high similarity, and that is when you should stop, try to collect some data, and do unsupervised training).</p>
</blockquote>
<p>For people interested in going deeper, here is a list of topics that may help:</p>
<ol>
<li>Pooling</li>
<li>Siamese Networks</li>
<li>Contrastive Loss</li>
</ol>
<p>:) :)</p> | 2022-03-22 14:59:27.140000+00:00 | 2022-03-22 14:59:27.140000+00:00 | null | null | 63,461,262 | <p>I'm trying to get sentence vectors from hidden states in a BERT model. Looking at the huggingface BertModel instructions <a href="https://huggingface.co/bert-base-multilingual-cased?text=This%20sentence%20etc" rel="noreferrer">here</a>, which say:</p>
<pre><code>from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
</code></pre>
<p>So first note, as it is on the website, this does /not/ run. You get:</p>
<pre><code>>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'BertTokenizer' object is not callable
</code></pre>
<p>But it looks like a minor change fixes it, in that you don't call the tokenizer directly, but ask it to encode the input:</p>
<pre><code>encoded_input = tokenizer.encode(text, return_tensors="pt")
output = model(encoded_input)
</code></pre>
<p>OK, that aside, the tensors I get, however, have a different shape than I expected:</p>
<pre><code>>>> output[0].shape
torch.Size([1,11,768])
</code></pre>
<p>This is a lot of layers. Which is the correct layer to use for sentence embeddings? <code>[0]</code>? <code>[-1]</code>? Averaging several? I have the goal of being able to do cosine similarity with these, so I need a proper 1xN vector rather than an NxK tensor.</p>
<p>I see that the popular <a href="https://github.com/hanxiao/bert-as-service#building-a-qa-semantic-search-engine-in-3-minutes" rel="noreferrer">bert-as-a-service project</a> appears to use <code>[0]</code></p>
<p>Is this correct? Is there documentation for what each of the layers are?</p> | 2020-08-18 03:00:39.800000+00:00 | 2022-03-22 14:59:27.140000+00:00 | null | bert-language-model|huggingface-transformers | ['https://huggingface.co/models?pipeline_tag=sentence-similarity&sort=downloads', 'https://www.sbert.net/docs/training/overview.html', 'https://www.sbert.net/examples/unsupervised_learning/README.html', 'https://arxiv.org/pdf/2104.06979.pdf', 'https://arxiv.org/pdf/2104.08821.pdf', 'https://www.sbert.net/docs/package_reference/losses.html'] | 6 |
63,464,865 | <p>I don't think there is a single authoritative document saying what to use and when. You need to experiment and measure what is best for your task. Recent observations about BERT are nicely summarized in this paper: <a href="https://arxiv.org/pdf/2002.12327.pdf" rel="noreferrer">https://arxiv.org/pdf/2002.12327.pdf</a>.</p>
<p>I think the rule of thumb is:</p>
<ul>
<li><p>Use the last layer if you are going to fine-tune the model for your specific task. And finetune whenever you can, several hundred or even dozens of training examples are enough.</p>
</li>
<li><p>Use some of the middle layers (7th or 8th) if you cannot finetune the model (see the sketch after this list). The intuition behind that is that the layers first develop a more and more abstract and general representation of the input. At some point, the representation starts to be more targeted to the pre-training task.</p>
</li>
</ul>
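<p>A minimal sketch of pulling out a middle layer with the Hugging Face API (the layer index and mean pooling are choices to validate on your task; <code>output_hidden_states=True</code> is required to expose all layers):</p>
<pre><code>import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained('bert-base-multilingual-cased',
                                  output_hidden_states=True)

encoded_input = tokenizer.encode("Replace me by any text you'd like.",
                                 return_tensors='pt')
with torch.no_grad():
    outputs = model(encoded_input)

# hidden_states: embeddings + one tensor per layer (13 entries for BERT-base)
hidden_states = outputs.hidden_states  # transformers >= 4; use outputs[2] on 3.x
layer7 = hidden_states[7]              # shape (1, seq_len, 768)
sentence_vector = layer7.mean(dim=1)   # mean-pool tokens -> (1, 768)
</code></pre>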
<p>Bert-as-service uses the last layer by default (but it is configurable). Here, it would be <code>[:, -1]</code>. However, it always returns a list of vectors for all input tokens. The vector corresponding to the first special (so-called <code>[CLS]</code>) token is considered to be the sentence embedding. This is where the <code>[0]</code> comes from in the snippet you refer to.</p> | 2020-08-18 08:37:37.320000+00:00 | 2020-08-18 16:31:40.297000+00:00 | 2020-08-18 16:31:40.297000+00:00 | null | 63,461,262 | <p>I'm trying to get sentence vectors from hidden states in a BERT model. Looking at the huggingface BertModel instructions <a href="https://huggingface.co/bert-base-multilingual-cased?text=This%20sentence%20etc" rel="noreferrer">here</a>, which say:</p>
<pre><code>from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
</code></pre>
<p>So first note, as it is on the website, this does /not/ run. You get:</p>
<pre><code>>>> Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'BertTokenizer' object is not callable
</code></pre>
<p>But it looks like a minor change fixes it, in that you don't call the tokenizer directly, but ask it to encode the input:</p>
<pre><code>encoded_input = tokenizer.encode(text, return_tensors="pt")
output = model(encoded_input)
</code></pre>
<p>OK, that aside, the tensors I get, however, have a different shape than I expected:</p>
<pre><code>>>> output[0].shape
torch.Size([1,11,768])
</code></pre>
<p>This is a lot of layers. Which is the correct layer to use for sentence embeddings? <code>[0]</code>? <code>[-1]</code>? Averaging several? I have the goal of being able to do cosine similarity with these, so I need a proper 1xN vector rather than an NxK tensor.</p>
<p>I see that the popular <a href="https://github.com/hanxiao/bert-as-service#building-a-qa-semantic-search-engine-in-3-minutes" rel="noreferrer">bert-as-a-service project</a> appears to use <code>[0]</code></p>
<p>Is this correct? Is there documentation for what each of the layers are?</p> | 2020-08-18 03:00:39.800000+00:00 | 2022-03-22 14:59:27.140000+00:00 | null | bert-language-model|huggingface-transformers | ['https://arxiv.org/pdf/2002.12327.pdf'] | 1 |
54,212,740 | <p>Before going into the solution, I would first comment on the solutions proposed in the question. The first solution would work better than the second. This is because it is very hard to interpret the (probability) values of the neural network output. Closeness of the values might be caused by similarity of the classes involved (in this case, a dog might look like a cat). Sometimes you may end up with unseen classes being assigned to one of the known classes with high probability.</p>
<p>Most supervised classification machine learning algorithms are designed to map an input to one of some fixed number of classes. This type of classification is called <strong>closed world classification</strong>.<br />
E.g.</p>
<ul>
<li><strong>MNIST</strong> - handwritten digit classification</li>
<li><strong>Cat - Dog</strong> classification</li>
</ul>
<p>When classification involves some unlabeled/unknown classes, the approach is called open-world classification. There are various published papers [<a href="https://www.cs.uic.edu/%7Eliub/publications/emnlp17-camera-ready.pdf" rel="nofollow noreferrer">1</a>, <a href="https://www.kdd.org/kdd2016/papers/files/rpp0426-feiA.pdf" rel="nofollow noreferrer">2</a>, <a href="https://arxiv.org/pdf/1801.05609.pdf" rel="nofollow noreferrer">3</a>].</p>
<p>I will explain my solution using the approach proposed by <a href="https://arxiv.org/pdf/1801.05609.pdf" rel="nofollow noreferrer">3</a>.
There are two options for applying open-world classification (from here on, OWC) to the problem in question.</p>
<ol>
<li>Classifying all new classes as a single class</li>
<li>Classifying all new classes as a single class, then further grouping similar samples into a single class and different samples into different classes.</li>
</ol>
<h3>1. Classifying all new classes as a single class</h3>
<p>Although there are many types of model that could fit this type of classification (one could be the first solution proposed in the question), I will discuss the model of <a href="https://arxiv.org/pdf/1801.05609.pdf" rel="nofollow noreferrer">3</a>. Here the network first decides whether to classify or to reject the input. Ideally, if the sample is from the seen classes, the network classifies it into one of the seen classes; otherwise the network rejects it. The authors of <a href="https://arxiv.org/pdf/1801.05609.pdf" rel="nofollow noreferrer">3</a> called this network the Open Classification Network (OCN). A Keras implementation of OCN could be (I've simplified the network to just focus on the output of the model):</p>
<pre><code>from tensorflow import keras

inputs = keras.layers.Input(shape=(28, 28, 1))
x = keras.layers.Conv2D(64, 3, activation="relu")(inputs)
x = keras.layers.Flatten()(x)
embedding = keras.layers.Dense(256, activation="linear", name="embedding_layer")(x)
reject_output = keras.layers.Dense(1, activation="sigmoid", name="reject_layer")(embedding)
classification_output = keras.layers.Dense(num_of_classes, activation="softmax", name="classification_layer")(embedding)
ocn_model = keras.models.Model(inputs=inputs, outputs=[reject_output, classification_output])
</code></pre>
<p>The model is trained in a way that jointly optimizes both <code>reject_output</code> and <code>classification_output</code> losses.</p>
<h3>2. Classifying all new classes as a single class, then further grouping similar samples</h3>
<p>The authors of <a href="https://arxiv.org/pdf/1801.05609.pdf" rel="nofollow noreferrer">3</a> used another network to find the similarity between samples. They called the network the Pairwise Classification Network (PCN). PCN classifies whether two inputs are from the same class or from different classes. We can use the <code>embedding</code> of the first solution and pairwise similarity metrics to create the PCN network. In PCN the weights are shared for both inputs. This could be implemented using Keras:</p>
<pre><code>embedding_model = keras.models.Sequential([
    keras.layers.Conv2D(64, 3, activation="relu", input_shape=(28, 28, 1)),
    keras.layers.Flatten(),
    keras.layers.Dense(256, activation="linear", name="embedding_layer")
])
input1 = keras.layers.Input(shape=(28, 28, 1))
input2 = keras.layers.Input(shape=(28, 28, 1))
embedding1 = embedding_model(input1)
embedding2 = embedding_model(input2)
merged = keras.layers.Concatenate()([embedding1, embedding2])
output = keras.layers.Dense(1, activation="sigmoid")(merged)
pcn_model = keras.models.Model(inputs=[input1, input2], outputs=output)
</code></pre>
<p>The PCN model will be trained to reduce the distance between samples of the same class and increase the distance between samples of different classes.</p>
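<p>Once PCN is trained, its "same class" probability can serve as a pairwise distance for grouping the rejected samples, as described next. A minimal sketch (assuming scikit-learn, and that <code>pairs_a</code>/<code>pairs_b</code> are hypothetical arrays enumerating all pairs of the <code>n</code> rejected samples):</p>
<pre><code>from sklearn.cluster import AgglomerativeClustering

# p[i, j]: PCN's predicted probability that samples i and j share a class
p = pcn_model.predict([pairs_a, pairs_b]).reshape(n, n)
D = 1.0 - p  # turn "same-class" probability into a distance

clusterer = AgglomerativeClustering(
    n_clusters=None, distance_threshold=0.5,  # threshold is a knob to tune
    affinity='precomputed',  # 'metric' in newer scikit-learn versions
    linkage='average')
labels = clusterer.fit_predict(D)
</code></pre>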
<p>After the PCN network is trained, an auto-encoder is trained to learn useful representations from the unseen classes. Then a clustering algorithm is used to group (cluster) the unseen classes, using the PCN model as the distance function.</p> | 2019-01-16 08:13:28.980000+00:00 | 2020-08-05 16:36:39.033000+00:00 | 2020-08-05 16:36:39.033000+00:00 | null | 54,210,943 | <p>I have a basic question. Suppose I am training an image classifier for cats and dogs, but I need an extra functionality: if an image does not belong to either category, how do I get to know that? Some of the options I was thinking of were:</p>
<ol>
<li>Instead of 2 neurons, I add a 3rd neuron to the last layer and get my training labels y as a one-hot encoding of 3 labels, the 3rd for being in neither the cat nor the dog class. I will use some random examples for my 3rd class.</li>
<li>I will use only 2 neurons and use some probability threshold to tell which class my image should belong to.</li>
</ol>
<p>However, I do not think either of these methods is viable.</p>
<p>Can anyone suggest a good technique to classify images which do not belong to my training categories?</p> | 2019-01-16 05:31:30.610000+00:00 | 2020-08-05 16:36:39.033000+00:00 | 2019-01-16 05:37:19.950000+00:00 | machine-learning|keras|neural-network|deep-learning|classification | ['https://www.cs.uic.edu/%7Eliub/publications/emnlp17-camera-ready.pdf', 'https://www.kdd.org/kdd2016/papers/files/rpp0426-feiA.pdf', 'https://arxiv.org/pdf/1801.05609.pdf', 'https://arxiv.org/pdf/1801.05609.pdf', 'https://arxiv.org/pdf/1801.05609.pdf', 'https://arxiv.org/pdf/1801.05609.pdf', 'https://arxiv.org/pdf/1801.05609.pdf'] | 7
51,201,035 | <p><strong>TL;DR</strong></p>
<p>The opposite is actually the case. Higher precision calculations are less desired by frameworks like TensorFlow. This is due to slower training and larger models (more ram and disc space).</p>
<p><strong>The long version</strong></p>
<p>Neural networks actually benefit from using lower precision representations. <a href="https://arxiv.org/pdf/1502.02551.pdf" rel="nofollow noreferrer">This paper</a> is a good introduction to the topic.</p>
<blockquote>
<p>The key finding of our exploration is that deep neural networks can
be trained using low-precision fixed-point arithmetic, provided
that the stochastic rounding scheme is applied while operating on
fixed-point numbers.</p>
</blockquote>
<p>They use 16-bit fixed-point numbers rather than the much higher precision 32-bit floating-point numbers (more information on the difference <a href="https://stackoverflow.com/questions/7524838/fixed-point-vs-floating-point-number">here</a>).</p>
<p>The following image was taken from that paper. It shows the test error for different rounding schemes as well as the number of bits dedicated to the integer part of the fixed point representation. As you can see the solid red and blue lines (16 bit fixed) have a very similar error to the black line (32 bit float).</p>
<p><a href="https://i.stack.imgur.com/RMGoS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RMGoS.png" alt="enter image description here"></a></p>
<p>The main benefit/driver for going to a lower precision is computational cost and storage of weights. So the higher precision hardware would not give enough of an accuracy increase to outweigh the cost of slower computation.</p>
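<p>For what it's worth, newer TensorFlow versions expose this trade-off directly through mixed precision; a sketch (requires TF 2.4+ and fp16-capable hardware to see speedups):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

# Compute in float16 where safe; variables stay float32 for stability.
mixed_precision.set_global_policy('mixed_float16')

model = tf.keras.Sequential([
    layers.Dense(128, activation='relu', input_shape=(784,)),
    # Keep the output in float32 so the softmax stays numerically stable.
    layers.Dense(10, activation='softmax', dtype='float32'),
])
</code></pre>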
<p>Studies like this, I believe, are a large driver behind the specs for neural-network-specific processing hardware, such as <a href="https://en.wikipedia.org/wiki/Tensor_processing_unit" rel="nofollow noreferrer">Google's new TPU</a>. Even though most GPUs don't support 16-bit floats yet, Google is <a href="https://github.com/tensorflow/tensorflow/issues/1300" rel="nofollow noreferrer">working to support it</a>.</p> | 2018-07-05 23:13:58.080000+00:00 | 2018-07-05 23:31:54.140000+00:00 | 2018-07-05 23:31:54.140000+00:00 | null | 51,033,775 | <p>Hi, I was reading the <a href="https://www.tensorflow.org/programmers_guide/using_gpu" rel="noreferrer">using GPUs page</a> for TensorFlow and I was wondering if GPU precision performance was ever a factor in TensorFlow. For example, given a machine with two cards,</p>
<blockquote>
<p>gaming gpu</p>
</blockquote>
<p>+</p>
<blockquote>
<p>workstation gpu</p>
</blockquote>
<p>is there any implementation where the workstation card's higher precision performance could overcome its slower clock speed?</p>
<p>I'm not sure if these situations would exist in the context of gradient descent, or network performance after training, or elsewhere entirely, but I would love to get some more information on the topic!</p>
<p>Thanks in advance.</p> | 2018-06-26 01:37:18.280000+00:00 | 2018-07-05 23:31:54.140000+00:00 | 2018-07-05 20:04:17.873000+00:00 | python|tensorflow|hardware | ['https://arxiv.org/pdf/1502.02551.pdf', 'https://stackoverflow.com/questions/7524838/fixed-point-vs-floating-point-number', 'https://i.stack.imgur.com/RMGoS.png', 'https://en.wikipedia.org/wiki/Tensor_processing_unit', 'https://github.com/tensorflow/tensorflow/issues/1300'] | 5 |
38,424,944 | <p>Those two distance metrics are probably strongly correlated so it might not matter all that much which one you use. As you point out, cosine distance means we don't have to worry about the length of the vectors at all.</p>
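<p>One way to see the connection: after unit-normalising, Euclidean distance is a monotone function of cosine similarity, so both rank neighbours identically. A quick numeric check (a sketch):</p>
<pre><code>import numpy as np

a, b = np.random.randn(300), np.random.randn(300)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)  # unit-normalise

cos = a @ b
sq_euclidean = np.sum((a - b) ** 2)
print(np.isclose(sq_euclidean, 2 - 2 * cos))  # ||a-b||^2 == 2 - 2*cos(a,b)
</code></pre>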
<p>This paper indicates that there is a relationship between the frequency of the word and the length of the word2vec vector. <a href="http://arxiv.org/pdf/1508.02297v1.pdf" rel="noreferrer">http://arxiv.org/pdf/1508.02297v1.pdf</a></p> | 2016-07-17 19:07:52.547000+00:00 | 2016-07-17 19:07:52.547000+00:00 | null | null | 38,423,387 | <p>I have been reading the papers on Word2Vec (e.g. <a href="https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf" rel="noreferrer">this one</a>), and I think I understand training the vectors to maximize the probability of other words found in the same contexts.</p>
<p>However, I do not understand why cosine is the correct measure of word similarity. Cosine similarity says that two vectors point in the same direction, but they could have different magnitudes.</p>
<p>For example, cosine similarity makes sense comparing bag-of-words for documents. Two documents might be of different length, but have similar distributions of words.</p>
<p>Why not, say, Euclidean distance?</p>
<p>Can anyone explain why cosine similarity works for word2Vec?</p> | 2016-07-17 16:25:09.487000+00:00 | 2019-09-17 05:40:03.100000+00:00 | 2017-11-30 14:49:57.400000+00:00 | nlp|deep-learning|word2vec | ['http://arxiv.org/pdf/1508.02297v1.pdf'] | 1