a_id | a_body | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags | q_id | q_body | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags | _arxiv_links | _n_arxiv_links |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
45,600,510 | <p>After reading more about robust linear regression, I think I better understand the source of the problem. As outlined in <a href="https://arxiv.org/pdf/1208.5595.pdf" rel="nofollow noreferrer">this paper</a> and alluded to in the <a href="https://rdrr.io/rforge/robustbase/man/lmrob.control.html" rel="nofollow noreferrer">docs for lmrob.control</a>, the first step of a robust regression involves sub sampling the input data. In cases with many categorical predictors, there is a higher likelihood that the sub-sample will contain co-linear columns which results in a matrix that is not full rank and hence the reported "DGELS" error. The "KS2011" and "KS2014" settings in lmrob allow you to specify that the algorithm should take extra care to avoid co-linear columns when picking a sub-sample, however in cases where the number of data points is not much bigger than the number of variables in the model (as is sometimes the case for my application), the algorithm still cannot find a non-singular data subset from the initial starting point and it still fails. This doesn't explain why restarting a new R session can help lmrob find a non-singular subset, but it does explain why this is a difficult problem that often throws errors.</p> | 2017-08-09 20:55:46.533000+00:00 | 2017-08-09 20:55:46.533000+00:00 | null | null | 43,037,955 | <p>In R, I'm using lmrob from the robustbase package to fit a simple linear model of the form:</p>
<pre><code>lmrob(value ~ t + as.factor(r) + as.factor(c) + 0, data=subs, setting="KS2014")
</code></pre>
<p>This works fine 95% of the time, but every once in a while the call fails and gives this error:</p>
<blockquote>
<p>Error: DGELS: weighted design matrix not of full rank (column XX).</p>
</blockquote>
<p>where XX is a varying column number. I can fix this by simply executing the lmrob command repeatedly until it finally succeeds -- usually this takes 1-2 tries until it works. Note that I am not changing any of the inputs when I rerun lmrob.</p>
<p>Does anyone know of a setting I can change to avoid having to manually re-run the lmrob command to get it to work? I've tried changing some of the control parameters without success:</p>
<pre><code>lm_control <- lmrob.control(setting="KS2014")
lm_control$max.it <- 1000
lm_control$nResample <- 1500
</code></pre> | 2017-03-27 04:27:55.853000+00:00 | 2017-08-09 20:55:46.533000+00:00 | null | r|lm|singular|robust | ['https://arxiv.org/pdf/1208.5595.pdf', 'https://rdrr.io/rforge/robustbase/man/lmrob.control.html'] | 2 |
45,003,486 | <h3>CLOSED FORM (TIKHONOV) VERSUS GRADIENT DESCENT</h3>
<p>Hi! Nice explanations for the intuitive and top-notch mathematical approaches there. I just wanted to add some specifics that, while not "problem-solving", may definitely help to speed up and give some consistency to the process of finding a good regularization hyperparameter.</p>
<p>I assume that you are talking about the <strong>L2</strong> (a.k.a. "weight decay") regularization, linearly weighted by the <em>lambda</em> term, and that you are optimizing the weights of your model either with the <strong>closed-form <a href="https://en.wikipedia.org/wiki/Tikhonov_regularization" rel="nofollow noreferrer">Tikhonov</a> equation</strong> (highly recommended for low-dimensional linear regression models), or with some variant of <strong>gradient descent with backpropagation</strong>. And that in this context, you want to choose the value for <em>lambda</em> that provides the best generalization ability.</p>
<hr />
<h3>CLOSED FORM (TIKHONOV)</h3>
<p>If you are able to go the Tikhonov way with your model (<a href="https://www.youtube.com/watch?v=NN7mBupK-8o#t=12m45s" rel="nofollow noreferrer">Andrew Ng</a> says under 10k dimensions, but this suggestion is at least 5 years old) <a href="https://en.wikipedia.org/wiki/Tikhonov_regularization#Determination_of_the_Tikhonov_factor" rel="nofollow noreferrer">Wikipedia - determination of the Tikhonov factor</a> offers an interesting <strong>closed-form solution, which has been proven to provide the optimal value</strong>. But this solution probably raises some kind of implementation issues (time complexity/numerical stability) I'm not aware of, because there is no mainstream algorithm to perform it. This <a href="https://arxiv.org/abs/1610.01952" rel="nofollow noreferrer">2016 paper</a> looks very promising though and may be worth a try if you really have to optimize your linear model to its best.</p>
<ul>
<li>For a quicker prototype implementation, this <a href="https://pypi.python.org/pypi/InverseProblem/1.0" rel="nofollow noreferrer">2015</a> Python package seems to deal with it iteratively; you could let it optimize and then extract the final value for the lambda:</li>
</ul>
<blockquote>
<p>In this new innovative method, we have derived an iterative approach to solving the general Tikhonov regularization problem, which converges to the noiseless solution, does not depend strongly on the choice of lambda, and yet still avoids the inversion problem.</p>
</blockquote>
<p>And from the <a href="https://github.com/kathrynthegreat/InverseProblem" rel="nofollow noreferrer">GitHub README</a> of the project:
<code>InverseProblem.invert(A, be, k, l) #this will invert your A matrix, where be is noisy be, k is the no. of iterations, and lambda is your dampening effect (best set to 1)</code></p>
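<p>If you just want to see what the closed-form route looks like before reaching for a dedicated package, here is a minimal NumPy sketch of the standard ridge/Tikhonov solution w = (X<sup>T</sup>X + λI)<sup>-1</sup>X<sup>T</sup>y; the toy data matrix <code>X</code>, targets <code>y</code> and the candidate <code>lam</code> values are made up purely for illustration:</p>
<pre><code>import numpy as np

def ridge_closed_form(X, y, lam):
    """Closed-form Tikhonov/ridge weights: solve (X^T X + lam*I) w = X^T y."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)  # solving is more stable than explicitly inverting A

# toy usage with made-up data
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
for lam in (0.01, 0.1, 1.0, 10.0):
    print(lam, np.round(ridge_closed_form(X, y, lam), 3))
</code></pre>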
<hr />
<h3>GRADIENT DESCENT</h3>
<p><em>All links of this part are from Michael Nielsen's amazing online book "Neural Networks and Deep Learning", recommended reading!</em></p>
<p>For this approach there seems to be even less to say: the cost function is usually non-convex, the optimization is performed numerically and the performance of the model is measured by some form of cross validation (see <a href="http://neuralnetworksanddeeplearning.com/chap3.html#overfitting_and_regularization" rel="nofollow noreferrer">Overfitting and Regularization</a> and <a href="http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting" rel="nofollow noreferrer">why does regularization help reduce overfitting</a> if you haven't had enough of that). But even when cross-validating, Nielsen suggests something: you may want to take a look at <a href="http://neuralnetworksanddeeplearning.com/chap3.html#regularization" rel="nofollow noreferrer">this detailed explanation</a> on how the L2 regularization provides a weight decaying effect, but the summary is that it is <strong>inversely proportional to the number of samples <code>n</code></strong>, so when calculating the gradient descent equation with the L2 term,</p>
<blockquote>
<p>just use backpropagation, as usual, and then add <code>(λ/n)*w</code> to the partial derivative of all the weight terms.</p>
</blockquote>
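<p>In code, the rule just quoted amounts to a one-line change in the parameter update. A minimal NumPy sketch (the learning rate, <code>lam</code>, <code>n</code> and the data gradient below are placeholders of mine, not values from the book):</p>
<pre><code>import numpy as np

def sgd_step_with_l2(w, grad_data, lr, lam, n):
    """One SGD step where the L2 term adds (lam/n)*w to the data gradient,
    which is the same as the weight decay form w &lt;- w*(1 - lr*lam/n) - lr*grad_data."""
    grad = grad_data + (lam / n) * w
    return w - lr * grad

# tiny illustration with made-up numbers
w = np.array([1.0, -2.0])
print(sgd_step_with_l2(w, grad_data=np.array([0.3, -0.1]), lr=0.5, lam=5.0, n=50000))
</code></pre>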
<p>And his conclusion is that, when wanting a similar regularization effect with a different number of samples, lambda has to be changed proportionally:</p>
<blockquote>
<p>we need to modify the regularization parameter. The reason is because the size <code>n</code> of the training set has changed from <code>n=1000</code> to <code>n=50000</code>, and this changes the weight decay factor <code>1−learning_rate*(λ/n)</code>. If we continued to use <code>λ=0.1</code> that would mean much less weight decay, and thus much less of a regularization effect. We compensate by changing to <code>λ=5.0</code>.</p>
</blockquote>
<p>This is only useful when applying the same model to different amounts of the same data, but I think it opens up the door for some intuition on how it should work, and, more importantly, speeds up the hyperparametrization process by allowing you to fine-tune lambda on smaller subsets and then scale up.</p>
<p>For choosing the exact values, he suggests in his conclusions on <a href="http://neuralnetworksanddeeplearning.com/chap3.html#how_to_choose_a_neural_network%27s_hyper-parameters" rel="nofollow noreferrer">how to choose a neural network's hyperparameters</a> the purely empirical approach: start with 1 and then progressively multiply&divide by 10 until you find the proper order of magnitude, and then do a local search within that region. In the comments of <a href="https://scicomp.stackexchange.com/a/10673">this SE related question</a>, the user Brian Borchers also suggests a very well-known method that may be useful for that local search (a minimal sketch of such a sweep is given after the plot below):</p>
<ol>
<li>Take small subsets of the training and validation sets (to be able to make many of them in a reasonable amount of time)</li>
<li>Starting with <code>λ=0</code> and increasing by small amounts within some region, perform a quick training&validation of the model and plot both loss functions</li>
<li>You will observe three things:</li>
<li>The CV loss function will be consistently higher than the training one, since your model is optimized for the training data exclusively (<em>EDIT: After some time I've seen a MNIST case where adding L2 helped the CV loss decrease faster than the training one until convergence. Probably due to the ridiculous consistency of the data and a suboptimal hyperparametrization though</em>).</li>
<li>The training loss function will have its minimum for <code>λ=0</code>, and then increase with the regularization, since preventing the model from optimally fitting the training data is exactly what regularization does.</li>
<li>The CV loss function will start high at <code>λ=0</code>, then decrease, and then start increasing again at some point (<em>EDIT: this assuming that the setup is able to overfit for <code>λ=0</code>, i.e. the model has enough power and no other regularization means are heavily applied</em>).</li>
<li>The optimal value for <code>λ</code> will probably be somewhere around the minimum of the CV loss function; it may also depend a little on what the training loss function looks like. See the picture for a possible (but not the only) representation of this: instead of "model complexity" you should interpret the x axis <strong>as <code>λ</code> being zero at the right and increasing towards the left</strong>.</li>
</ol>
<p><a href="https://i.stack.imgur.com/ZTQSP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZTQSP.png" alt="L2 diagnostics: instead of "model complexity" one should interpret the x axis **as λ being zero at the right and increasing towards the left" /></a></p>
<p>Hope this helps! Cheers,<br />
Andres</p> | 2017-07-10 03:43:34.443000+00:00 | 2021-05-15 09:23:10.207000+00:00 | 2021-05-15 09:23:10.207000+00:00 | null | 12,182,063 | <p>When we have a high degree linear polynomial that is used to fit a set of points in a linear regression setup, to prevent overfitting, we use regularization, and we include a lambda parameter in the cost function. This lambda is then used to update the theta parameters in the gradient descent algorithm.</p>
<p>My question is how do we calculate this lambda regularization parameter?</p> | 2012-08-29 16:04:04.237000+00:00 | 2021-05-15 09:23:10.207000+00:00 | null | machine-learning|data-mining|regression | ['https://en.wikipedia.org/wiki/Tikhonov_regularization', 'https://www.youtube.com/watch?v=NN7mBupK-8o#t=12m45s', 'https://en.wikipedia.org/wiki/Tikhonov_regularization#Determination_of_the_Tikhonov_factor', 'https://arxiv.org/abs/1610.01952', 'https://pypi.python.org/pypi/InverseProblem/1.0', 'https://github.com/kathrynthegreat/InverseProblem', 'http://neuralnetworksanddeeplearning.com/chap3.html#overfitting_and_regularization', 'http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting', 'http://neuralnetworksanddeeplearning.com/chap3.html#regularization', 'http://neuralnetworksanddeeplearning.com/chap3.html#how_to_choose_a_neural_network%27s_hyper-parameters', 'https://scicomp.stackexchange.com/a/10673', 'https://i.stack.imgur.com/ZTQSP.png'] | 12 |
46,908,399 | <p>I've also encountered this issue.
See <a href="https://arxiv.org/abs/1706.02677" rel="nofollow noreferrer">Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour</a> from Facebook which addresses the same issue. The suggested solution is simply to scale up the learning rate by k (after some reasonable warm-up epochs) for k GPUs.</p>
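<p>A tiny numeric sketch of why the learning-rate trick works (the gradients, learning rate and GPU count here are made up): averaging the per-GPU gradients and multiplying the learning rate by k takes exactly the same step as summing the gradients and keeping the original learning rate, which is what the next paragraph does in practice.</p>
<pre><code>import numpy as np

k = 4            # number of GPUs (made up)
lr = 0.1         # base learning rate (made up)
w = np.array([1.0, 2.0])
per_gpu_grads = [np.array([0.2, -0.1]) * (i + 1) for i in range(k)]  # fake per-GPU gradients

avg_grad = sum(per_gpu_grads) / k
sum_grad = sum(per_gpu_grads)

w_avg_scaled_lr = w - (k * lr) * avg_grad   # average the gradients, scale lr by k
w_sum_base_lr = w - lr * sum_grad           # sum the gradients, keep the original lr
print(np.allclose(w_avg_scaled_lr, w_sum_base_lr))  # True
</code></pre>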
<p>In practice I've found out that simply summing up the gradients from the GPUs (rather than averaging them) and using the original learning rate sometimes does the job as well.</p> | 2017-10-24 10:41:55.100000+00:00 | 2020-01-12 13:40:40.213000+00:00 | 2020-01-12 13:40:40.213000+00:00 | null | 43,845,644 | <p>When I execute the cifar10 model as described at <a href="https://www.tensorflow.org/tutorials/deep_cnn" rel="noreferrer">https://www.tensorflow.org/tutorials/deep_cnn</a> I achieve 86% accuracy after approx 4 hours using a single GPU , when I utilize 2 GPU's the accuracy drops to 84% but reaching 84% accuracy is faster on 2 GPU's than 1. </p>
<p>My intuition is
that average_gradients function as defined at <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py</a> returns a less accurate gradient value as an average of gradients will be less accurate than the actual gradient value. </p>
<p>If the gradients are less accurate, then the parameters that control the function that is learned as part of training are less accurate. Looking at the code (<a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py</a>), why is averaging the gradients over multiple GPU's less accurate than computing the gradient on a single GPU?</p>
<p>Is my intuition of averaging the gradients producing a less accurate value correct ?</p>
<p>Randomness in the model is described as : </p>
<pre><code>The images are processed as follows:
They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training.
They are approximately whitened to make the model insensitive to dynamic range.
For training, we additionally apply a series of random distortions to artificially increase the data set size:
Randomly flip the image from left to right.
Randomly distort the image brightness.
Randomly distort the image contrast.
</code></pre>
<p>src : <a href="https://www.tensorflow.org/tutorials/deep_cnn" rel="noreferrer">https://www.tensorflow.org/tutorials/deep_cnn</a></p>
<p>Does this have an effect on training accuracy ?</p>
<p>Update : </p>
<p>Attempting to investigate this further, the loss function value training with different number of GPU's.</p>
<pre><code>Training with 1 GPU : loss value : .7 , Accuracy : 86%
Training with 2 GPU's : loss value : .5 , Accuracy : 84%
</code></pre>
<p>Shouldn't the loss value be lower for higher accuracy, not vice versa?</p> | 2017-05-08 10:45:44.257000+00:00 | 2020-01-12 13:40:40.213000+00:00 | 2017-05-10 20:23:26.797000+00:00 | tensorflow|neural-network | ['https://arxiv.org/abs/1706.02677'] | 1
43,901,671 | <p>There is a decent discussion of this <a href="http://sebastianruder.com/optimizing-gradient-descent/index.html#parallelizinganddistributingsgd" rel="noreferrer">here</a> (not my content). Basically when you distribute SGD, you have to communicate gradients back and forth somehow between workers. This is inherently imperfect, and so your distributed SGD typically diverges from a sequential, single-worker SGD at least to some degree. It is also typically faster, so there is a trade off.</p>
<p><a href="https://arxiv.org/pdf/1412.6651.pdf" rel="noreferrer">[Zhang <em>et. al.</em>, 2015]</a> proposes one method for distributed SGD called elastic-averaged SGD. The paper goes through a stability analysis characterizing the behavior of the gradients under different communication constraints. It gets a little heavy, but it might shed some light on why you see this behavior.</p>
<p><strong>Edit:</strong> regarding whether the loss should be lower for the higher accuracy, it is going to depend on a couple of things. First, I am assuming that you are using softmax cross-entropy for your loss (as stated in the deep_cnn tutorial you linked), and assuming accuracy is the total number of correct predictions divided by the total number of samples. In this case, a lower loss <em>on the same dataset</em> should correlate to a higher accuracy. The emphasis is important.</p>
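<p>To make those two definitions concrete, here is a small NumPy sketch (the labels and predicted probabilities are invented): both classifiers below get every sample right, so they have identical accuracy, yet their cross-entropy losses differ because one of them is far less confident. This is part of why the two numbers only loosely track each other.</p>
<pre><code>import numpy as np

def cross_entropy(probs, labels):
    # mean negative log-probability assigned to the true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def accuracy(probs, labels):
    return np.mean(np.argmax(probs, axis=1) == labels)

labels = np.array([0, 1, 1, 0])
confident = np.array([[0.9, 0.1], [0.1, 0.9], [0.2, 0.8], [0.8, 0.2]])
hesitant = np.array([[0.6, 0.4], [0.4, 0.6], [0.45, 0.55], [0.55, 0.45]])

print(accuracy(confident, labels), cross_entropy(confident, labels))  # 1.0, lower loss
print(accuracy(hesitant, labels), cross_entropy(hesitant, labels))    # 1.0, higher loss
</code></pre>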
<p>If you are reporting loss during training but then report accuracy on your validation (or testing) dataset, it is possible for these two to be only loosely correlated. This is because the model is fitting (minimizing loss) to a certain subset of your total samples throughout the training process, and then tests against new samples that it has never seen before to verify that it generalizes well. The loss against this testing/validation set could be (and probably is) higher than the loss against the training set, so if the two numbers are being reported from different sets, you may not be able to draw comparisons like "loss for 1 GPU case should be lower since its accuracy is lower".</p>
<p>Second, if you are distributing the training then you are calculating losses across multiple workers (I believe), but only one accuracy at the end, again against a testing or validation set. Maybe the loss being reported is the best loss seen by any one worker, but overall the average losses were higher.</p>
<p>Basically I do not think we have enough information to decisively say why the loss and accuracy do not seem to correlate the way you expect, but there are a number of ways this could be happening, so I wouldn't dismiss it out of hand.</p> | 2017-05-10 19:56:35.713000+00:00 | 2017-05-10 20:52:51.353000+00:00 | 2017-05-10 20:52:51.353000+00:00 | null | 43,845,644 | <p>When I execute the cifar10 model as described at <a href="https://www.tensorflow.org/tutorials/deep_cnn" rel="noreferrer">https://www.tensorflow.org/tutorials/deep_cnn</a> I achieve 86% accuracy after approx 4 hours using a single GPU , when I utilize 2 GPU's the accuracy drops to 84% but reaching 84% accuracy is faster on 2 GPU's than 1. </p>
<p>My intuition is
that average_gradients function as defined at <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py</a> returns a less accurate gradient value as an average of gradients will be less accurate than the actual gradient value. </p>
<p>If the gradients are less accurate, then the parameters that control the function that is learned as part of training are less accurate. Looking at the code (<a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py</a>), why is averaging the gradients over multiple GPU's less accurate than computing the gradient on a single GPU?</p>
<p>Is my intuition of averaging the gradients producing a less accurate value correct ?</p>
<p>Randomness in the model is described as : </p>
<pre><code>The images are processed as follows:
They are cropped to 24 x 24 pixels, centrally for evaluation or randomly for training.
They are approximately whitened to make the model insensitive to dynamic range.
For training, we additionally apply a series of random distortions to artificially increase the data set size:
Randomly flip the image from left to right.
Randomly distort the image brightness.
Randomly distort the image contrast.
</code></pre>
<p>src : <a href="https://www.tensorflow.org/tutorials/deep_cnn" rel="noreferrer">https://www.tensorflow.org/tutorials/deep_cnn</a></p>
<p>Does this have an effect on training accuracy ?</p>
<p>Update : </p>
<p>Attempting to investigate this further, the loss function value training with different number of GPU's.</p>
<pre><code>Training with 1 GPU : loss value : .7 , Accuracy : 86%
Training with 2 GPU's : loss value : .5 , Accuracy : 84%
</code></pre>
<p>Shouldn't the loss value be lower for higher accuracy, not vice versa?</p> | 2017-05-08 10:45:44.257000+00:00 | 2020-01-12 13:40:40.213000+00:00 | 2017-05-10 20:23:26.797000+00:00 | tensorflow|neural-network | ['http://sebastianruder.com/optimizing-gradient-descent/index.html#parallelizinganddistributingsgd', 'https://arxiv.org/pdf/1412.6651.pdf'] | 2
54,861,573 | <p>You can check from this recent paper (<a href="https://arxiv.org/pdf/1902.03524.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1902.03524.pdf</a>) that the CNN developed by Baidu is state of the art in image recognition.</p> | 2019-02-25 07:44:17.060000+00:00 | 2019-02-25 07:44:17.060000+00:00 | null | null | 54,849,750 | <p>I am a newbie to OpenCV. I want to create an object detection algorithm which tracks a football player. I want to know who that player is and what his jersey number is, and I want to know the best way to find this. Which algorithm should I use? I have done one project which tracks a user with a color range, in which I converted each video image to <code>hsv</code>. But the challenge for me is: after detecting the player, how can I find the jersey number? </p>
<p>Here is my code:</p>
<pre><code>#Import libraries
import cv2
import os
import numpy as np
# import the necessary packages
from collections import deque
import numpy as np
import cv2
import imutils
import time
#Reading the video
vidcap = cv2.VideoCapture('football.mp4')
success,image = vidcap.read()
count = 0
success = True
idx = 0
#Read the video frame by frame
while success:
    #converting into hsv image
    hsv = cv2.cvtColor(image,cv2.COLOR_BGR2HSV)
    #green range
    lower_green = np.array([40,40, 40])
    upper_green = np.array([70, 255, 255])
    #blue range
    lower_blue = np.array([110,50,50])
    upper_blue = np.array([130,255,255])
    #Red range
    lower_red = np.array([0,31,255])
    upper_red = np.array([176,255,255])
    #white range
    lower_white = np.array([0,0,0])
    upper_white = np.array([0,0,255])
    #Define a mask ranging from lower to uppper
    mask = cv2.inRange(hsv, lower_green, upper_green)
    #Do masking
    res = cv2.bitwise_and(image, image, mask=mask)
    #convert to hsv to gray
    res_bgr = cv2.cvtColor(res,cv2.COLOR_HSV2BGR)
    res_gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
    #Defining a kernel to do morphological operation in threshold image to
    #get better output.
    kernel = np.ones((13,13),np.uint8)
    thresh = cv2.threshold(res_gray,127,255,cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    #find contours in threshold image
    im2,contours,hierarchy = cv2.findContours(thresh,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)
    prev = 0
    font = cv2.FONT_HERSHEY_SIMPLEX
    for c in contours:
        x,y,w,h = cv2.boundingRect(c)
        #Detect players
        if(h>=(1.5)*w):
            if(w>15 and h>= 15):
                idx = idx+1
                player_img = image[y:y+h,x:x+w]
                player_hsv = cv2.cvtColor(player_img,cv2.COLOR_BGR2HSV)
                #If player has blue jersy
                mask1 = cv2.inRange(player_hsv, lower_blue, upper_blue)
                res1 = cv2.bitwise_and(player_img, player_img, mask=mask1)
                res1 = cv2.cvtColor(res1,cv2.COLOR_HSV2BGR)
                res1 = cv2.cvtColor(res1,cv2.COLOR_BGR2GRAY)
                nzCount = cv2.countNonZero(res1)
                #If player has red jersy
                mask2 = cv2.inRange(player_hsv, lower_red, upper_red)
                res2 = cv2.bitwise_and(player_img, player_img, mask=mask2)
                res2 = cv2.cvtColor(res2,cv2.COLOR_HSV2BGR)
                res2 = cv2.cvtColor(res2,cv2.COLOR_BGR2GRAY)
                nzCountred = cv2.countNonZero(res2)
                if(nzCount >= 20):
                    #Mark blue jersy players as france
                    cv2.putText(image, 'France', (x-2, y-2), font, 0.8, (255,0,0), 2, cv2.LINE_AA)
                    cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),3)
                else:
                    pass
                if(nzCountred>=20):
                    #Mark red jersy players as belgium
                    cv2.putText(image, 'Belgium', (x-2, y-2), font, 0.8, (0,0,255), 2, cv2.LINE_AA)
                    cv2.rectangle(image,(x,y),(x+w,y+h),(0,0,255),3)
                else:
                    pass
        if((h>=1 and w>=1) and (h<=30 and w<=30)):
            player_img = image[y:y+h,x:x+w]
            player_hsv = cv2.cvtColor(player_img,cv2.COLOR_BGR2HSV)
            #white ball detection
            mask1 = cv2.inRange(player_hsv, lower_white, upper_white)
            res1 = cv2.bitwise_and(player_img, player_img, mask=mask1)
            res1 = cv2.cvtColor(res1,cv2.COLOR_HSV2BGR)
            res1 = cv2.cvtColor(res1,cv2.COLOR_BGR2GRAY)
            nzCount = cv2.countNonZero(res1)
            if(nzCount >= 3):
                # detect football
                cv2.putText(image, 'football', (x-2, y-2), font, 0.8, (0,255,0), 2, cv2.LINE_AA)
                cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),3)
    cv2.imwrite("./Cropped/frame%d.jpg" % count, res)
    # print('Read a new frame: ', success) # save frame as JPEG file
    count += 1
    cv2.imshow('Match Detection',image)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    success,image = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
</code></pre> | 2019-02-24 07:33:03.437000+00:00 | 2019-03-18 07:33:13.197000+00:00 | 2019-03-18 07:33:13.197000+00:00 | opencv|tensorflow|deep-learning | ['https://arxiv.org/pdf/1902.03524.pdf'] | 1 |
46,623,958 | <p>The model weights were ported from caffe, so it's in <a href="https://github.com/BVLC/caffe/wiki/Image-Format:-BGR-not-RGB" rel="noreferrer">BGR format</a>.</p>
<blockquote>
<p>Caffe uses a BGR color channel scheme for reading image files. This is
due to the underlying OpenCV implementation of imread. The assumption
of RGB is a common mistake.</p>
</blockquote>
<p>You can find the original caffe model weight files <a href="http://www.robots.ox.ac.uk/~vgg/research/very_deep/" rel="noreferrer">on VGG website</a>. This link can also be found on Keras documentation.</p>
<p>I think the second range would be the closest one. There's no scaling during training, but the authors have subtracted the mean value of the ILSVRC2014 training set. As stated in <a href="https://arxiv.org/pdf/1409.1556.pdf" rel="noreferrer">the original VGG paper</a>, section 2.1:</p>
<blockquote>
<p>The only preprocessing we do is subtracting the mean RGB value,
computed on the training set, from each pixel.</p>
</blockquote>
<p>This sentence is actually what <code>imagenet_utils.preprocess_input(mode='caffe')</code> does.</p>
<ol>
<li>Convert from RGB to BGR: because <code>keras.preprocessing.image.load_img()</code> loads images in RGB format, this conversion is required for VGG16 (and all models ported from caffe).</li>
<li>Subtract the mean BGR values: <code>(103.939, 116.779, 123.68)</code> is subtracted from the image array.</li>
</ol>
<p>The preprocessor is not used in <code>vgg16.py</code>. It's imported in the file so that users can use the preprocess function by calling <code>keras.applications.vgg16.preprocess_input(rgb_img_array)</code>, without caring about where model weights come from. The argument for <code>preprocess_input()</code> is always an image array in RGB format. If the model was trained with caffe, <code>preprocess_input()</code> will convert the array into BGR format.</p>
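<p>For reference, here is a minimal NumPy sketch of what those two steps amount to for a single RGB image array; the helper name and the toy image are mine, and this is only an approximation of what the Keras utility does internally:</p>
<pre><code>import numpy as np

def caffe_style_preprocess(rgb_img):
    """Approximate the 'caffe' mode described above: flip RGB to BGR,
    then subtract the mean BGR values (103.939, 116.779, 123.68)."""
    x = np.asarray(rgb_img, dtype=np.float32)[..., ::-1].copy()   # RGB -> BGR
    x -= np.array([103.939, 116.779, 123.68], dtype=np.float32)   # mean subtraction
    return x

# toy usage: a fake 224x224 RGB image with values in [0, 255]
img = np.random.randint(0, 256, size=(224, 224, 3)).astype(np.float32)
print(caffe_style_preprocess(img).shape)
</code></pre>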
<p>Note that the function <code>preprocess_input()</code> is not intended to be called from <code>imagenet_utils</code> module. If you are using VGG16, call <code>keras.applications.vgg16.preprocess_input()</code> and the images will be converted to a suitable format and range that VGG16 was trained on. Similarly, if you are using Inception V3, call <code>keras.applications.inception_v3.preprocess_input()</code> and the images will be converted to the range that Inception V3 was trained on. </p> | 2017-10-07 18:53:57.537000+00:00 | 2017-10-07 19:05:04.620000+00:00 | 2017-10-07 19:05:04.620000+00:00 | null | 46,622,428 | <p>I'm trying to use a pretrained VGG 16 from keras. But I'm really unsure about what the input range should be. </p>
<p>Quick answer, which of these color orders?</p>
<ul>
<li>RGB </li>
<li>BGR</li>
</ul>
<p>And which range? </p>
<ul>
<li>0 to 255?</li>
<li>balanced from about -125 to about +130?</li>
<li>0 to 1?</li>
<li>-1 to 1?</li>
</ul>
<p>I notice <a href="https://github.com/fchollet/keras/blob/master/keras/applications/vgg16.py" rel="noreferrer">the file where the model is defined</a> imports an input preprocessor:</p>
<pre><code>from .imagenet_utils import preprocess_input
</code></pre>
<p>But this preprocessor is never used in the rest of the file.</p>
<p>Also, when I check the <a href="https://github.com/fchollet/keras/blob/master/keras/applications/imagenet_utils.py/#L11" rel="noreferrer">code for this preprocessor</a>, it has two modes: <code>caffe</code> and <code>tf</code> (tensorflow). </p>
<p>Each mode works differently. </p>
<p>Finally, I can't find consistent documentation on the internet. </p>
<p>So, what is the best range for working? To what range are the model weights trained?</p> | 2017-10-07 16:20:59.240000+00:00 | 2018-08-15 06:53:50.300000+00:00 | 2018-08-15 06:53:50.300000+00:00 | python|tensorflow|image-processing|keras|vgg-net | ['https://github.com/BVLC/caffe/wiki/Image-Format:-BGR-not-RGB', 'http://www.robots.ox.ac.uk/~vgg/research/very_deep/', 'https://arxiv.org/pdf/1409.1556.pdf'] | 3 |
18,039,959 | <p>Although I have absolutely no idea what you are talking about, I think these two pdf files contain some sort of explanation.</p>
<p><a href="http://www.scirp.org/journal/PaperDownload.aspx?paperID=22405" rel="nofollow">Link1</a></p>
<p><a href="http://arxiv.org/pdf/1301.5585.pdf" rel="nofollow">Link2</a></p>
<p>I just tried to answer it, because I know how frustrating it can be, when you something you really want is behind a paywall! Hope it helps.</p>
<p>Cheers!</p> | 2013-08-04 05:22:12.670000+00:00 | 2013-08-04 05:22:12.670000+00:00 | null | null | 18,039,896 | <p>I'm looking for an explanation of the Kameda-Weiner algorithm.</p>
<p>I found the paper "On the State Minimization of Nondeterministic Finite Automata" which, I assume, contains this, though it's unfortunately behind a paywall, and I'm just a hobbyist.</p>
<p>Can someone explain the algorithm, or point me to another source?</p> | 2013-08-04 05:10:30.900000+00:00 | 2020-05-03 16:17:10.317000+00:00 | 2013-08-04 14:43:55.010000+00:00 | algorithm|finite-automata|minimization | ['http://www.scirp.org/journal/PaperDownload.aspx?paperID=22405', 'http://arxiv.org/pdf/1301.5585.pdf'] | 2 |
45,646,108 | <p>I have to supplement the answer from @chrert.</p>
<p>The challenge of the <a href="https://arxiv.org/pdf/1406.4773.pdf" rel="nofollow noreferrer">paper</a> is that it has two loss functions (<code>l1</code> and <code>l2</code>) and you have to update SOME variables with the gradients related to both functions.</p>
<p>Before using <code>opt.apply_gradients</code>, you have to find pairs <code>(grad1, var1)</code> computed by <code>l1</code> and pairs <code>(grad2, var2)</code> computed by <code>l2</code>, and combine them where <code>var1.name==var2.name</code>.</p>
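<p>Here is a hedged sketch of that pairing, written against the TF 1.x API used in this question (under TF 2 the same calls live in <code>tf.compat.v1</code>); the tiny model, the two stand-in losses and the learning rate are placeholders of mine, not the paper's, and the pairing relies on the same-order property mentioned next:</p>
<pre><code>import tensorflow as tf  # TF 1.x style API

x = tf.placeholder(tf.float32, shape=[None, 3])
w = tf.get_variable("w", shape=[3, 1])
pred = tf.matmul(x, w)

l1 = tf.reduce_mean(tf.square(pred))   # stand-in for the paper's first loss
l2 = tf.reduce_mean(tf.abs(pred))      # stand-in for the paper's second loss

opt = tf.train.GradientDescentOptimizer(0.01)
gv1 = opt.compute_gradients(l1)        # list of (gradient, variable) pairs
gv2 = opt.compute_gradients(l2)        # same variables, same order

# pair the two lists variable-by-variable and combine the gradients
combined = [(g1 + g2, v1) for (g1, v1), (g2, v2) in zip(gv1, gv2)]
train_op = opt.apply_gradients(combined)
</code></pre>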
<p>A simple solution I found is that the pairs computed by <code>opt.compute_gradients</code> are of the same order.</p> | 2017-08-12 03:06:23.920000+00:00 | 2017-08-12 03:06:23.920000+00:00 | null | null | 45,453,569 | <p>I am implementing a <a href="https://arxiv.org/pdf/1406.4773" rel="nofollow noreferrer">paper</a> and the requirement is abstracted as follows. I have a CNN which has multiple layers, each of which is scoped. As the paper needs two example, I have used <code>opt.compute_gradients(loss)</code> and <code>tf.get_variable_scope().reuse_variables()</code> to obtain <code>[grad_and_vars1, grad_and_vars2]</code> (where <code>grad_and_vars</code> means gradient and variable pairs and the 1 represents first example).</p>
<p>How could I get the gradient and variable pairs for which the corresponding variables are among the required scopes?</p>
<p>Thank you in advance.</p> | 2017-08-02 06:56:01.463000+00:00 | 2017-08-12 03:06:23.920000+00:00 | null | tensorflow | ['https://arxiv.org/pdf/1406.4773.pdf'] | 1 |
2,402,387 | <p>(Probably too complex for an interview question.)</p>
<p>(You can use O(N) time to check the min, max, sum, sumsq, etc. are equal first.)</p>
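<p>A quick Python sketch of that pre-check (the function name is mine); note these aggregates are only a necessary condition, so equal values do not by themselves prove the arrays match:</p>
<pre><code>def quick_mismatch_check(a, b):
    """O(N) time, O(1) extra space: if any aggregate differs, the arrays
    cannot contain the same multiset of elements."""
    if len(a) != len(b):
        return False
    def agg(xs):
        return (min(xs), max(xs), sum(xs), sum(x * x for x in xs))
    return agg(a) == agg(b)

print(quick_mismatch_check([1, 2, 3, 4], [3, 1, 2, 4]))  # True  (consistent)
print(quick_mismatch_check([1, 2, 3, 4], [3, 4, 1, 1]))  # False (sum differs)
</code></pre>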
<p>Use <a href="http://arxiv.org/abs/0706.4107" rel="nofollow noreferrer">no-extra-space radix sort</a> to sort the two arrays in-place. O(N) time complexity, O(1) space.</p>
<p>Then compare them using the usual algorithm. O(N) time complexity, O(1) space.</p>
<p>(Provided (max − min) of the arrays is of O(N<sup>k</sup>) with a finite k.)</p> | 2010-03-08 15:29:00.970000+00:00 | 2010-03-08 16:29:32.230000+00:00 | 2010-03-08 16:29:32.230000+00:00 | null | 2,402,255 | <p>[Description] Given two integer arrays with the same length. Design an algorithm which can judge whether they're the same. The definition of "same" is that, if these two arrays were in sorted order, the elements in corresponding position should be the same.</p>
<pre><code>[Example]
<1 2 3 4> = <3 1 2 4>
<1 2 3 4> != <3 4 1 1>
</code></pre>
<p>[Limitation] The algorithm should require constant extra space, and O(n) running time.</p> | 2010-03-08 15:12:50.847000+00:00 | 2012-04-06 20:20:51.440000+00:00 | 2011-10-05 12:47:09.470000+00:00 | arrays|algorithm | ['http://arxiv.org/abs/0706.4107'] | 1 |
46,422,573 | <p>It all depends on the problem that you are trying to solve, the data available to you and the underlying domain. Let's get to it one by one:</p>
<p><strong>Type of Problem</strong><br>
There are multiple types of question answering systems, like one-word answers based on extracting the exact answer from various sentences, or returning the most similar sentence from a list of sentences based on the question asked by the user, using various similarity and embedding techniques. I think this paper: <a href="https://arxiv.org/abs/1506.03340" rel="nofollow noreferrer">Teaching Machines to Read and Comprehend</a> should be a good place to start getting an idea about such systems.</p>
<p><strong>Dataset</strong>
Next comes the dataset for such systems. Now there are various datasets available for question answering systems like :</p>
<ul>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="nofollow noreferrer">SQuAD dataset</a></li>
<li><a href="http://www.cs.cmu.edu/~ark/QA-data/" rel="nofollow noreferrer">QA dataset based on Wikipedia Articles</a></li>
<li><a href="https://research.fb.com/downloads/babi/" rel="nofollow noreferrer">Facebook bAbI dataset</a></li>
<li><a href="http://allenai.org/data.html" rel="nofollow noreferrer">AllenAI dataset based elementary Science question </a></li>
<li><a href="https://datasets.maluuba.com/NewsQA" rel="nofollow noreferrer">NewsQA datset</a></li>
</ul>
<p><strong>Methodologies</strong><br>
Well there are multiple ways to go about solving this problem. It would be difficult to list all of them in one answer, but I can provide you some references:</p>
<ul>
<li><a href="http://cs.umd.edu/~miyyer/data/deepqa.pdf" rel="nofollow noreferrer">Deep Learning for Question Answering</a></li>
<li><a href="https://www.slideshare.net/sujitpal/deep-learning-models-for-question-answering" rel="nofollow noreferrer">Various Deep Learning models on Question answering</a></li>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/" rel="nofollow noreferrer">SquAD dataset Leaderboard</a></li>
<li><a href="https://arxiv.org/abs/1507.02628" rel="nofollow noreferrer">Question Answering based on Word Alignment</a></li>
<li><a href="https://web.stanford.edu/class/cs224n/reports/2761224.pdf" rel="nofollow noreferrer">Attention Based Question Answering</a></li>
<li><a href="http://ai2-website.s3.amazonaws.com/publications/tableilp_ijcai_2016.pdf" rel="nofollow noreferrer">Reasoning-based QA</a></li>
</ul> | 2017-09-26 09:24:10.473000+00:00 | 2017-09-26 13:26:56.017000+00:00 | 2017-09-26 13:26:56.017000+00:00 | null | 46,419,272 | <p>I have a paragraph, system has to understand it and it should answer all the questions asked by the user. Please name the techniques and methodologies.</p> | 2017-09-26 06:29:30.510000+00:00 | 2017-09-26 13:26:56.017000+00:00 | null | machine-learning|nlp|deep-learning|artificial-intelligence | ['https://arxiv.org/abs/1506.03340', 'https://rajpurkar.github.io/SQuAD-explorer/', 'http://www.cs.cmu.edu/~ark/QA-data/', 'https://research.fb.com/downloads/babi/', 'http://allenai.org/data.html', 'https://datasets.maluuba.com/NewsQA', 'http://cs.umd.edu/~miyyer/data/deepqa.pdf', 'https://www.slideshare.net/sujitpal/deep-learning-models-for-question-answering', 'https://rajpurkar.github.io/SQuAD-explorer/', 'https://arxiv.org/abs/1507.02628', 'https://web.stanford.edu/class/cs224n/reports/2761224.pdf', 'http://ai2-website.s3.amazonaws.com/publications/tableilp_ijcai_2016.pdf'] | 12 |
49,901,128 | <p>Try using the TensorFlow Object Detection API. Link: <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="noreferrer">TensorFlow Object Detection API</a></p>
<p>And you can then customize your overall app behaviour accordingly, managing all your requirements (e.g. showing a pop-up with all the details of the object being detected, after receiving some kind of callback from the TensorFlow Object Detection API once the object has been detected successfully). I also believe that you can customise the TensorFlow object detection scenario part as per your need (here, I am talking specifically about the UI-related part, i.e. how you want your app to display the detection graphically).</p>
<p>Details like how it works offline and the resulting overall APK size can be better understood from the links given below:</p>
<p>1] <a href="https://medium.com/@WuStangDan/step-by-step-tensorflow-object-detection-api-tutorial-part-1-selecting-a-model-a02b6aabe39e" rel="noreferrer">Step by Step TensorFlow Object Detection API Tutorial — Part 1: Selecting a Model</a></p>
<p>2] <a href="https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9" rel="noreferrer">How to train your own Object Detector with TensorFlow’s Object Detector API</a></p>
<p>As an overview, for detecting the objects offline you have to be limited (just to reduce your APK size) with your own set of data/objects (as you have mentioned that you have got a fixed object for detection, that's good) and then you have to train (can be trained locally as well as on cloud) this set of objects using a SSD-Mobilenet model. Then you will have your own trained model (in simpler words) of those set of objects which will give you a retrained_graph.pb file (this goes into your assets folder for your android project) which is the final outcome that includes the trick (in simpler words) to detect and classify the camera frames in real time thereby displaying the results (or object details) as per the info (or the set of data/objects) provided; without the need of any sort of internet connection. For instance, <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java" rel="noreferrer">TF Detect</a> can track objects (from 80 categories) in the camera preview in real-time.</p>
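<p>Before bundling the retrained_graph.pb into the APK, it can help to sanity-check it from Python with the TF 1.x API; the file path below is an assumption, and the printed operation names are what you would then reference from the Android inference code (e.g. TensorFlowInferenceInterface):</p>
<pre><code>import tensorflow as tf  # TF 1.x API, as used by the TensorFlow for Poets codelabs

GRAPH_PATH = "retrained_graph.pb"  # hypothetical path to the retrained model

with tf.gfile.GFile(GRAPH_PATH, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")

# list a few operation names to locate the input/output tensors of the model
for op in graph.get_operations()[:10]:
    print(op.name)
</code></pre>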
<p>For further reference follow these links:</p>
<p>1] <a href="https://arxiv.org/abs/1409.4842" rel="noreferrer">Google Inception Model</a></p>
<p>2] <a href="https://github.com/tensorflow/models/tree/master/research/object_detection/" rel="noreferrer">Tensorflow Object Detection API Models</a></p>
<p>3] <a href="https://arxiv.org/abs/1611.10012" rel="noreferrer">Speed/Accuracy Trade-offs for Modern Convolutional Object Detectors</a></p>
<p>You can also optimize (or compress) the retrained_graph.pb to optimized_graph.pb, as this is the only major thing that would increase your APK file size. Long ago, when I tried detecting 5 different objects (using <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java" rel="noreferrer">TF Classify</a>), each object's folder had about 650 photographs and the overall size of all the 5 folders (together) was about 230 MB, and my retrained_graph.pb size was only 5.5 MB (which can further be optimized to optimized_graph.pb, reducing its size even more).</p>
<p>To start learning this from the beginner's level, I would suggest you go through these codelab links once and understand the working of these two projects, as I did too.</p>
<p>1] <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0" rel="noreferrer">TensorFlow For Poets</a></p>
<p>2] <a href="https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#0" rel="noreferrer">TensorFlow For Poets 2: Optimize for Mobile</a></p>
<p>Wishing you good luck.</p>
<p>The below link to TensorFlow GitHub (Master) includes almost everything:</p>
<p><a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android" rel="noreferrer">https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android</a></p> | 2018-04-18 13:34:24.987000+00:00 | 2018-04-19 07:27:11.003000+00:00 | 2018-04-19 07:27:11.003000+00:00 | null | 49,852,866 | <p>My requirement is to scan a fixed object. After recognizing that, I want to highlight the object and to display corresponding pre-feeded parameters accordingly, like height, width, circumference, etc.</p>
<p>All of this I want to do without internet, using the camera only.</p>
<p>Please let me know if there is any solution / suggestion for this.</p>
<p>I have seen CraftAR SDK. It seems working as per my requirement, in order to recognize object, but it uses its server for storing images, which I don't want. As I want the static image, to be stored in app itself.</p> | 2018-04-16 08:34:12.663000+00:00 | 2018-09-27 05:34:39.347000+00:00 | 2018-04-16 08:56:58.227000+00:00 | android|image-recognition|object-recognition | ['https://github.com/tensorflow/models/tree/master/research/object_detection', 'https://medium.com/@WuStangDan/step-by-step-tensorflow-object-detection-api-tutorial-part-1-selecting-a-model-a02b6aabe39e', 'https://towardsdatascience.com/how-to-train-your-own-object-detector-with-tensorflows-object-detector-api-bec72ecfe1d9', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/DetectorActivity.java', 'https://arxiv.org/abs/1409.4842', 'https://github.com/tensorflow/models/tree/master/research/object_detection/', 'https://arxiv.org/abs/1611.10012', 'https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/android/src/org/tensorflow/demo/ClassifierActivity.java', 'https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0', 'https://codelabs.developers.google.com/codelabs/tensorflow-for-poets-2/#0', 'https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android'] | 11 |
63,252,746 | <p>The operation you want to perform is called a <em>positional population count</em> on bytes. This is a well-known operation used in machine learning and some research has been done on <a href="https://arxiv.org/pdf/1911.02696.pdf" rel="nofollow noreferrer">fast algorithms</a> to solve this problem.</p>
<p>Unfortunately, the implementation of these algorithms is fairly involved. For this reason, I have developed a custom algorithm that is much simpler to implement but only yields roughly half the performance of the other other method. However, at measured 10 GB/s, it should still be a decent improvement over what you had previously.</p>
<p>The idea of this algorithm is to gather corresponding bits from groups of 32 bytes using <code>vpmovmskb</code> and then to take a scalar population count which is then added to the corresponding counter. This allows the dependency chains to be short and a consistent IPC of 3 to be reached.</p>
<p>Note that compared to your algorithm, my code flips the order of bits around. You can change this by editing which <code>counts</code> array elements the assembly code accesses if you want. However, in the interest of future readers, I'd like to leave this code with the more common convention where the least significant bit is considered bit 0.</p>
<h1>Source code</h1>
<p>The complete source code can be found <a href="https://github.com/clausecker/pospopcnt" rel="nofollow noreferrer">on github</a>. The author has meanwhile developed this algorithm idea into a <a href="https://github.com/clausecker/pospop" rel="nofollow noreferrer">portable library</a> that can be used like this:</p>
<pre><code>import "github.com/clausecker/pospop"
var counts [8]int
pospop.Count8(counts, buf) // add positional popcounts for buf to counts
</code></pre>
<p>The algorithm is provided in two variants and has been tested on a machine with a processor identified as “Intel(R) Xeon(R) W-2133 CPU @ 3.60GHz.”</p>
<h2>Positional Population Count 32 Bytes at a Time.</h2>
<p>The counters are kept in general purpose registers for best performance. Memory is prefetched well in advance for better streaming behaviour. The scalar tail is processed using a very simple <code>SHRL</code>/<code>ADCL</code> combination. A performance of up to 11 GB/s is achieved.</p>
<pre><code>#include "textflag.h"
// func PospopcntReg(counts *[8]int32, buf []byte)
TEXT ·PospopcntReg(SB),NOSPLIT,$0-32
MOVQ counts+0(FP), DI
MOVQ buf_base+8(FP), SI // SI = &buf[0]
MOVQ buf_len+16(FP), CX // CX = len(buf)
// load counts into register R8--R15
MOVL 4*0(DI), R8
MOVL 4*1(DI), R9
MOVL 4*2(DI), R10
MOVL 4*3(DI), R11
MOVL 4*4(DI), R12
MOVL 4*5(DI), R13
MOVL 4*6(DI), R14
MOVL 4*7(DI), R15
SUBQ $32, CX // pre-subtract 32 bit from CX
JL scalar
vector: VMOVDQU (SI), Y0 // load 32 bytes from buf
PREFETCHT0 384(SI) // prefetch some data
ADDQ $32, SI // advance SI past them
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R15 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R14 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R13 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R12 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R11 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R10 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R9 // add to counter
VPADDD Y0, Y0, Y0 // shift Y0 left by one place
VPMOVMSKB Y0, AX // move MSB of Y0 bytes to AX
POPCNTL AX, AX // count population of AX
ADDL AX, R8 // add to counter
SUBQ $32, CX
JGE vector // repeat as long as bytes are left
scalar: ADDQ $32, CX // undo last subtraction
JE done // if CX=0, there's nothing left
loop: MOVBLZX (SI), AX // load a byte from buf
INCQ SI // advance past it
SHRL $1, AX // CF=LSB, shift byte to the right
ADCL $0, R8 // add CF to R8
SHRL $1, AX
ADCL $0, R9 // add CF to R9
SHRL $1, AX
ADCL $0, R10 // add CF to R10
SHRL $1, AX
ADCL $0, R11 // add CF to R11
SHRL $1, AX
ADCL $0, R12 // add CF to R12
SHRL $1, AX
ADCL $0, R13 // add CF to R13
SHRL $1, AX
ADCL $0, R14 // add CF to R14
SHRL $1, AX
ADCL $0, R15 // add CF to R15
DECQ CX // mark this byte as done
JNE loop // and proceed if any bytes are left
// write R8--R15 back to counts
done: MOVL R8, 4*0(DI)
MOVL R9, 4*1(DI)
MOVL R10, 4*2(DI)
MOVL R11, 4*3(DI)
MOVL R12, 4*4(DI)
MOVL R13, 4*5(DI)
MOVL R14, 4*6(DI)
MOVL R15, 4*7(DI)
VZEROUPPER // restore SSE-compatibility
RET
</code></pre>
<h1>Positional Population Count 96 Bytes at a Time with CSA</h1>
<p>This variant performs all of the optimisations above but reduces 96 bytes to 64 using a single CSA step beforehand. As expected, this improves the performance by roughly 30% and achieves up to 16 GB/s.</p>
<pre><code>#include "textflag.h"
// func PospopcntRegCSA(counts *[8]int32, buf []byte)
TEXT ·PospopcntRegCSA(SB),NOSPLIT,$0-32
MOVQ counts+0(FP), DI
MOVQ buf_base+8(FP), SI // SI = &buf[0]
MOVQ buf_len+16(FP), CX // CX = len(buf)
// load counts into register R8--R15
MOVL 4*0(DI), R8
MOVL 4*1(DI), R9
MOVL 4*2(DI), R10
MOVL 4*3(DI), R11
MOVL 4*4(DI), R12
MOVL 4*5(DI), R13
MOVL 4*6(DI), R14
MOVL 4*7(DI), R15
SUBQ $96, CX // pre-subtract 32 bit from CX
JL scalar
vector: VMOVDQU (SI), Y0 // load 96 bytes from buf into Y0--Y2
VMOVDQU 32(SI), Y1
VMOVDQU 64(SI), Y2
ADDQ $96, SI // advance SI past them
PREFETCHT0 320(SI)
PREFETCHT0 384(SI)
VPXOR Y0, Y1, Y3 // first adder: sum
VPAND Y0, Y1, Y0 // first adder: carry out
VPAND Y2, Y3, Y1 // second adder: carry out
VPXOR Y2, Y3, Y2 // second adder: sum (full sum)
VPOR Y0, Y1, Y0 // full adder: carry out
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R15
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R14
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R13
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R12
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R11
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R10
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
VPADDB Y0, Y0, Y0 // shift carry out bytes left
VPADDB Y2, Y2, Y2 // shift sum bytes left
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R9
VPMOVMSKB Y0, AX // MSB of carry out bytes
VPMOVMSKB Y2, DX // MSB of sum bytes
POPCNTL AX, AX // carry bytes population count
POPCNTL DX, DX // sum bytes population count
LEAL (DX)(AX*2), AX // sum popcount plus 2x carry popcount
ADDL AX, R8
SUBQ $96, CX
JGE vector // repeat as long as bytes are left
scalar: ADDQ $96, CX // undo last subtraction
JE done // if CX=0, there's nothing left
loop: MOVBLZX (SI), AX // load a byte from buf
INCQ SI // advance past it
SHRL $1, AX // is bit 0 set?
ADCL $0, R8 // add it to R8
SHRL $1, AX // is bit 0 set?
ADCL $0, R9 // add it to R9
SHRL $1, AX // is bit 0 set?
ADCL $0, R10 // add it to R10
SHRL $1, AX // is bit 0 set?
ADCL $0, R11 // add it to R11
SHRL $1, AX // is bit 0 set?
ADCL $0, R12 // add it to R12
SHRL $1, AX // is bit 0 set?
ADCL $0, R13 // add it to R13
SHRL $1, AX // is bit 0 set?
ADCL $0, R14 // add it to R14
SHRL $1, AX // is bit 0 set?
ADCL $0, R15 // add it to R15
DECQ CX // mark this byte as done
JNE loop // and proceed if any bytes are left
// write R8--R15 back to counts
done: MOVL R8, 4*0(DI)
MOVL R9, 4*1(DI)
MOVL R10, 4*2(DI)
MOVL R11, 4*3(DI)
MOVL R12, 4*4(DI)
MOVL R13, 4*5(DI)
MOVL R14, 4*6(DI)
MOVL R15, 4*7(DI)
VZEROUPPER // restore SSE-compatibility
RET
</code></pre>
<h1>Benchmarks</h1>
<p>Here are benchmarks for the two algorithms and a naïve reference implementation in pure Go. Full benchmarks can be found in the github repository.</p>
<pre><code>BenchmarkReference/10-12 12448764 80.9 ns/op 123.67 MB/s
BenchmarkReference/32-12 4357808 258 ns/op 124.25 MB/s
BenchmarkReference/1000-12 151173 7889 ns/op 126.76 MB/s
BenchmarkReference/2000-12 68959 15774 ns/op 126.79 MB/s
BenchmarkReference/4000-12 36481 31619 ns/op 126.51 MB/s
BenchmarkReference/10000-12 14804 78917 ns/op 126.72 MB/s
BenchmarkReference/100000-12 1540 789450 ns/op 126.67 MB/s
BenchmarkReference/10000000-12 14 77782267 ns/op 128.56 MB/s
BenchmarkReference/1000000000-12 1 7781360044 ns/op 128.51 MB/s
BenchmarkReg/10-12 49255107 24.5 ns/op 407.42 MB/s
BenchmarkReg/32-12 186935192 6.40 ns/op 4998.53 MB/s
BenchmarkReg/1000-12 8778610 115 ns/op 8677.33 MB/s
BenchmarkReg/2000-12 5358495 208 ns/op 9635.30 MB/s
BenchmarkReg/4000-12 3385945 357 ns/op 11200.23 MB/s
BenchmarkReg/10000-12 1298670 901 ns/op 11099.24 MB/s
BenchmarkReg/100000-12 115629 8662 ns/op 11544.98 MB/s
BenchmarkReg/10000000-12 1270 916817 ns/op 10907.30 MB/s
BenchmarkReg/1000000000-12 12 93609392 ns/op 10682.69 MB/s
BenchmarkRegCSA/10-12 48337226 23.9 ns/op 417.92 MB/s
BenchmarkRegCSA/32-12 12843939 80.2 ns/op 398.86 MB/s
BenchmarkRegCSA/1000-12 7175629 150 ns/op 6655.70 MB/s
BenchmarkRegCSA/2000-12 3988408 295 ns/op 6776.20 MB/s
BenchmarkRegCSA/4000-12 3016693 382 ns/op 10467.41 MB/s
BenchmarkRegCSA/10000-12 1810195 642 ns/op 15575.65 MB/s
BenchmarkRegCSA/100000-12 191974 6229 ns/op 16053.40 MB/s
BenchmarkRegCSA/10000000-12 1622 698856 ns/op 14309.10 MB/s
BenchmarkRegCSA/1000000000-12 16 68540642 ns/op 14589.88 MB/s
</code></pre> | 2020-08-04 18:08:35.303000+00:00 | 2020-10-30 15:37:11.553000+00:00 | 2020-10-30 15:37:11.553000+00:00 | null | 63,248,047 | <p>This post is related to <a href="https://stackoverflow.com/questions/63242918/golang-assembly-implement-of-mm-add-epi32/">Golang assembly implement of _mm_add_epi32</a> , where it adds paired elements in two <code>[8]int32</code> list, and returns the updated first one.</p>
<p>According to the pprof profile, I found passing <code>[8]int32</code> is expensive, so I think passing a pointer to the list is much cheaper, and the bench result verified this. Here's the Go version:</p>
<pre><code>func __mm_add_epi32_inplace_purego(x, y *[8]int32) {
(*x)[0] += (*y)[0]
(*x)[1] += (*y)[1]
(*x)[2] += (*y)[2]
(*x)[3] += (*y)[3]
(*x)[4] += (*y)[4]
(*x)[5] += (*y)[5]
(*x)[6] += (*y)[6]
(*x)[7] += (*y)[7]
}
</code></pre>
<p>This function is called in two levels of loop.</p>
<p>The algorithm computes a <em>position population count</em> over an array of bytes.</p>
<p>Thanks advice from @fuz , I know that writing whole algorithm in assembly is the best choice and makes sense, but it's beyond my ability since I never learn programming in assembly.</p>
<p>However, it should be easy to optimize the inner loop with assembly:</p>
<pre><code>counts := make([][8]int32, numRowBytes)
for i, b = range byteSlice {
if b == 0 { // more than half of elements in byteSlice is 0.
continue
}
expand = _expand_byte[b]
__mm_add_epi32_inplace_purego(&counts[i], expand)
}
// expands a byte into its bits
var _expand_byte = [256]*[8]int32{
&[8]int32{0, 0, 0, 0, 0, 0, 0, 0},
&[8]int32{0, 0, 0, 0, 0, 0, 0, 1},
&[8]int32{0, 0, 0, 0, 0, 0, 1, 0},
&[8]int32{0, 0, 0, 0, 0, 0, 1, 1},
&[8]int32{0, 0, 0, 0, 0, 1, 0, 0},
...
}
</code></pre>
<p>Can you help to write an assembly version of <code>__mm_add_epi32_inplace_purego</code> (this is enough for me), or even the whole loop? Thank you in advance.</p> | 2020-08-04 13:32:51.430000+00:00 | 2021-01-14 17:00:56.507000+00:00 | 2021-01-14 17:00:56.507000+00:00 | go|assembly|x86|simd|avx | ['https://arxiv.org/pdf/1911.02696.pdf', 'https://github.com/clausecker/pospopcnt', 'https://github.com/clausecker/pospop'] | 3 |
38,096,523 | <p>As you pointed out, <a href="https://arxiv.org/abs/1410.5401" rel="nofollow">Neural Turing Machines</a> seem to work well for learning basic algorithms. For instance, the repeat copy task implemented in the paper suggests that an NTM can learn such an algorithm by itself. As of now, NTMs have been used only for simple tasks, so probing their scope with pow(x,n) will be interesting given that repeat copy works well. I suggest reading <a href="https://arxiv.org/pdf/1505.00521v3.pdf" rel="nofollow">Reinforcement Learning Neural Turing Machines - Revised</a> for a deeper understanding. </p>
<p>Also, recent developments in the area of <code>Memory Networks</code> empower us to perform more complicated tasks. Hence, to make a neural network understand pow(x,n) might be possible. So go ahead and give it a shot!</p> | 2016-06-29 10:09:20.320000+00:00 | 2016-06-29 10:09:20.320000+00:00 | null | null | 30,448,277 | <p>I am currently in the process of learning neural networks and can understand basic examples like AND, OR, Addition, Multiplication, etc.</p>
<p>Right now, I am trying to build a neural network that takes two inputs x and n, and computes pow(x, n). This would require the neural network to have some form of a loop, and I am not sure how I can model a network with a loop.</p>
<p>Can this sort of computation be modelled on a neural network? I am assuming it is possible.. based on the recently released paper(Neural Turing Machine), but not sure how. Any pointers on this would be very helpful.</p>
<p>Thanks!</p> | 2015-05-26 01:59:54.517000+00:00 | 2016-06-29 10:09:20.320000+00:00 | 2015-05-26 07:52:55.330000+00:00 | machine-learning|neural-network | ['https://arxiv.org/abs/1410.5401', 'https://arxiv.org/pdf/1505.00521v3.pdf'] | 2 |
21,872,498 | <p>Here are some references I provided as part of an answer <a href="https://math.stackexchange.com/a/680875/66696">here</a>.
I think they address the actual problem you are trying to solve:</p>
<ul>
<li><a href="http://www.shogun-toolbox.org/static/notebook/current/logdet.html" rel="nofollow noreferrer">notes</a> for an implementation in the Shogun library</li>
<li>Erlend Aune, Daniel P. Simpson: <em>Parameter estimation in high dimensional Gaussian distributions</em>, particularly section 2.1 (<a href="http://arxiv.org/abs/1105.5256" rel="nofollow noreferrer">arxiv:1105.5256</a>)</li>
<li>Ilse C.F. Ipsen, Dean J. Lee: <em>Determinant Approximations</em> (<a href="http://arxiv.org/abs/1105.0437" rel="nofollow noreferrer">arxiv:1105.0437</a>)</li>
<li>Arnold Reusken: <em>Approximation of the determinant of large sparse symmetric positive definite matrices</em> (<a href="http://arxiv.org/abs/hep-lat/0008007" rel="nofollow noreferrer">arxiv:hep-lat/0008007</a>)</li>
</ul>
<p>Quoting from the Shogun notes:</p>
<blockquote>
<p>The usual technique for computing the log-determinant term in the likelihood expression relies on Cholesky factorization of the matrix, i.e. Σ = LL<sup>T</sup> (L is the lower triangular Cholesky factor), and then using the diagonal entries of the factor to compute log(det(Σ)) = 2 ∑<sub>i=1</sub><sup>n</sup> log(L<sub>ii</sub>). However, for sparse matrices, as covariance matrices usually are, the Cholesky factors often suffer from fill-in phenomena - they turn out to be not so sparse themselves. Therefore, for large dimensions this technique becomes infeasible because of a massive memory requirement for storing all these irrelevant non-diagonal co-efficients of the factor. While ordering techniques have been developed to permute the rows and columns beforehand in order to reduce fill-in, e.g. approximate minimum degree (AMD) reordering, these techniques depend largely on the sparsity pattern and therefore not guaranteed to give better result.</p>
<p>Recent research shows that using a number of techniques from complex analysis, numerical linear algebra and greedy graph coloring, we can, however, approximate the log-determinant up to an arbitrary precision [Aune et. al., 2012]. The main trick lies within the observation that we can write log(det(Σ)) as trace(log(Σ)), where log(Σ) is the matrix-logarithm.</p>
</blockquote> | 2014-02-19 06:11:32.040000+00:00 | 2014-02-19 06:11:32.040000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 19,107,617 | <p>I am trying to figure out the fastest method to find the determinant of sparse symmetric and real matrices in Python, using the scipy <code>sparse</code> module, but I am really surprised that there is no determinant function. I am aware I could use LU factorization to compute the determinant, but I don't see an easy way to do it because the return of <code>scipy.sparse.linalg.splu</code> is an object and instantiating a dense L and U matrix is not worth it - I may as well do <code>sp.linalg.det(A.todense())</code> where <code>A</code> is my scipy sparse matrix. </p>
<p>I am also a bit surprised why others have not faced the problem of efficient determinant computation within scipy. How would one use <code>splu</code> to compute determinant? </p>
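<p>To make it concrete, a minimal sketch of the LU route being asked about (my own illustration; the permutation signs are ignored, so this only gives log|det(A)|, and it assumes the factorization succeeds):</p>
<pre><code>import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

A = sp.rand(1000, 1000, density=0.01, format='csc') + sp.eye(1000, format='csc')

lu = splu(A)                                   # sparse LU factorization of A
# diag(L) is all ones, so log|det(A)| is just the sum over diag(U)
logdet = np.log(np.abs(lu.U.diagonal())).sum()
print(logdet)
</code></pre>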
<p>I looked into <code>pySparse</code> and <code>scikits.sparse.chlmod</code>. The latter is not practical for me right now - it needs package installations, and I am also not sure how fast the code is before I go to all the trouble.
Any solutions? Thanks in advance. </p> | 2013-10-01 03:48:08.430000+00:00 | 2021-04-27 19:47:46.900000+00:00 | 2013-10-31 20:12:07.523000+00:00 | python|numpy|scipy|linear-algebra|sparse-matrix | ['https://math.stackexchange.com/a/680875/66696', 'http://www.shogun-toolbox.org/static/notebook/current/logdet.html', 'http://arxiv.org/abs/1105.5256', 'http://arxiv.org/abs/1105.0437', 'http://arxiv.org/abs/hep-lat/0008007'] | 5 |
35,337,827 | <p>The paper is not accurately reflecting the model. If you download the source from arxiv it has an accurate model description as model.txt, and the names in there correlate strongly with the names in the released model.</p>
<p>To answer your first question, <code>sess.graph.get_operations()</code> gives you a list of operations. For an op, <code>op.name</code> gives you the name and <code>op.values()</code> gives you a list of tensors it produces (in the inception-v3 model, all tensor names are the op name with a ":0" appended to it, so <code>pool_3:0</code> is the tensor produced by the final pooling op.)</p> | 2016-02-11 11:15:22.920000+00:00 | 2017-02-27 15:41:25.320000+00:00 | 2017-02-27 15:41:25.320000+00:00 | null | 35,336,648 | <p>The graph object in Tensorflow has a method called "get_tensor_by_name(name)". Is there anyway to get a list of valid tensor names?</p>
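<p>A tiny TF 1.x sketch of that <code>get_operations()</code> approach (a made-up two-op graph rather than the inception graph):</p>
<pre><code>import tensorflow as tf   # TF 1.x API assumed

g = tf.Graph()
with g.as_default():
    x = tf.placeholder(tf.float32, [None, 10], name="input")
    y = tf.nn.relu(x, name="hidden")

for op in g.get_operations():
    print(op.name, [t.name for t in op.values()])
# -> input ['input:0']
#    hidden ['hidden:0']
</code></pre>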
<p>If not, does anyone know the valid names for the pretrained model inception-v3 <a href="https://www.tensorflow.org/versions/v0.6.0/tutorials/image_recognition/index.html">from here</a>? From their example, pool_3, is one valid tensor but a list of all of them would be nice. I looked at <a href="http://arxiv.org/abs/1512.00567">the paper referred to</a> and some of the layers seems to correspond to the sizes in table 1 but not all of them.</p> | 2016-02-11 10:24:57.010000+00:00 | 2019-11-13 22:11:04.630000+00:00 | null | python|tensorflow | [] | 0 |
<p>The MSCOCO paper describes that the dataset actually has 91 classes, but in the 2014 release they published only a subset of 80 classes because they didn't annotate the segmentation of the remaining 11 classes. It seems that the TensorFlow models were trained using the original 90-class numbering.</p>
<p>MSCOCO paper: <a href="https://arxiv.org/pdf/1405.0312.pdf" rel="noreferrer">https://arxiv.org/pdf/1405.0312.pdf</a></p>
<p>From appendix II: "Our dataset contains 91 object categories (the 2014 release contains segmentation masks for 80 of these categories)."</p>
<p>-Ricardo</p> | 2018-11-17 14:39:18.653000+00:00 | 2018-11-17 14:39:18.653000+00:00 | null | null | 50,665,110 | <p>The labelmaps of Tensorflows object_detection project contain 90 classes, although COCO has only 80 categories.
Therefore the parameter <code>num_classes</code> in all sample configs is set to 90.</p>
<p>If I now download and use the COCO 2017 dataset, do I need to set this parameter to 80 or leave it at 90?</p>
<p>If 80 (as COCO has 80 classes) I need to adjust the labelmap, so the standard <code>mscoco_label_map.pbtxt</code> is not correct, right?</p>
<p>I would be really thankful if someone could shine a light on this one :)</p>
<p>Here are the standard 80 COCO classes:</p>
<pre><code>person
bicycle
car
motorbike
aeroplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
sofa
pottedplant
bed
diningtable
toilet
tvmonitor
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
toothbrush
</code></pre>
<p>And here is the MS COCO labelmap of Tensorflows object_detection API:</p>
<pre><code>item {
name: "/m/01g317"
id: 1
display_name: "person"
}
item {
name: "/m/0199g"
id: 2
display_name: "bicycle"
}
item {
name: "/m/0k4j"
id: 3
display_name: "car"
}
item {
name: "/m/04_sv"
id: 4
display_name: "motorcycle"
}
item {
name: "/m/05czz6l"
id: 5
display_name: "airplane"
}
item {
name: "/m/01bjv"
id: 6
display_name: "bus"
}
item {
name: "/m/07jdr"
id: 7
display_name: "train"
}
item {
name: "/m/07r04"
id: 8
display_name: "truck"
}
item {
name: "/m/019jd"
id: 9
display_name: "boat"
}
item {
name: "/m/015qff"
id: 10
display_name: "traffic light"
}
item {
name: "/m/01pns0"
id: 11
display_name: "fire hydrant"
}
item {
name: "/m/02pv19"
id: 13
display_name: "stop sign"
}
item {
name: "/m/015qbp"
id: 14
display_name: "parking meter"
}
item {
name: "/m/0cvnqh"
id: 15
display_name: "bench"
}
item {
name: "/m/015p6"
id: 16
display_name: "bird"
}
item {
name: "/m/01yrx"
id: 17
display_name: "cat"
}
item {
name: "/m/0bt9lr"
id: 18
display_name: "dog"
}
item {
name: "/m/03k3r"
id: 19
display_name: "horse"
}
item {
name: "/m/07bgp"
id: 20
display_name: "sheep"
}
item {
name: "/m/01xq0k1"
id: 21
display_name: "cow"
}
item {
name: "/m/0bwd_0j"
id: 22
display_name: "elephant"
}
item {
name: "/m/01dws"
id: 23
display_name: "bear"
}
item {
name: "/m/0898b"
id: 24
display_name: "zebra"
}
item {
name: "/m/03bk1"
id: 25
display_name: "giraffe"
}
item {
name: "/m/01940j"
id: 27
display_name: "backpack"
}
item {
name: "/m/0hnnb"
id: 28
display_name: "umbrella"
}
item {
name: "/m/080hkjn"
id: 31
display_name: "handbag"
}
item {
name: "/m/01rkbr"
id: 32
display_name: "tie"
}
item {
name: "/m/01s55n"
id: 33
display_name: "suitcase"
}
item {
name: "/m/02wmf"
id: 34
display_name: "frisbee"
}
item {
name: "/m/071p9"
id: 35
display_name: "skis"
}
item {
name: "/m/06__v"
id: 36
display_name: "snowboard"
}
item {
name: "/m/018xm"
id: 37
display_name: "sports ball"
}
item {
name: "/m/02zt3"
id: 38
display_name: "kite"
}
item {
name: "/m/03g8mr"
id: 39
display_name: "baseball bat"
}
item {
name: "/m/03grzl"
id: 40
display_name: "baseball glove"
}
item {
name: "/m/06_fw"
id: 41
display_name: "skateboard"
}
item {
name: "/m/019w40"
id: 42
display_name: "surfboard"
}
item {
name: "/m/0dv9c"
id: 43
display_name: "tennis racket"
}
item {
name: "/m/04dr76w"
id: 44
display_name: "bottle"
}
item {
name: "/m/09tvcd"
id: 46
display_name: "wine glass"
}
item {
name: "/m/08gqpm"
id: 47
display_name: "cup"
}
item {
name: "/m/0dt3t"
id: 48
display_name: "fork"
}
item {
name: "/m/04ctx"
id: 49
display_name: "knife"
}
item {
name: "/m/0cmx8"
id: 50
display_name: "spoon"
}
item {
name: "/m/04kkgm"
id: 51
display_name: "bowl"
}
item {
name: "/m/09qck"
id: 52
display_name: "banana"
}
item {
name: "/m/014j1m"
id: 53
display_name: "apple"
}
item {
name: "/m/0l515"
id: 54
display_name: "sandwich"
}
item {
name: "/m/0cyhj_"
id: 55
display_name: "orange"
}
item {
name: "/m/0hkxq"
id: 56
display_name: "broccoli"
}
item {
name: "/m/0fj52s"
id: 57
display_name: "carrot"
}
item {
name: "/m/01b9xk"
id: 58
display_name: "hot dog"
}
item {
name: "/m/0663v"
id: 59
display_name: "pizza"
}
item {
name: "/m/0jy4k"
id: 60
display_name: "donut"
}
item {
name: "/m/0fszt"
id: 61
display_name: "cake"
}
item {
name: "/m/01mzpv"
id: 62
display_name: "chair"
}
item {
name: "/m/02crq1"
id: 63
display_name: "couch"
}
item {
name: "/m/03fp41"
id: 64
display_name: "potted plant"
}
item {
name: "/m/03ssj5"
id: 65
display_name: "bed"
}
item {
name: "/m/04bcr3"
id: 67
display_name: "dining table"
}
item {
name: "/m/09g1w"
id: 70
display_name: "toilet"
}
item {
name: "/m/07c52"
id: 72
display_name: "tv"
}
item {
name: "/m/01c648"
id: 73
display_name: "laptop"
}
item {
name: "/m/020lf"
id: 74
display_name: "mouse"
}
item {
name: "/m/0qjjc"
id: 75
display_name: "remote"
}
item {
name: "/m/01m2v"
id: 76
display_name: "keyboard"
}
item {
name: "/m/050k8"
id: 77
display_name: "cell phone"
}
item {
name: "/m/0fx9l"
id: 78
display_name: "microwave"
}
item {
name: "/m/029bxz"
id: 79
display_name: "oven"
}
item {
name: "/m/01k6s3"
id: 80
display_name: "toaster"
}
item {
name: "/m/0130jx"
id: 81
display_name: "sink"
}
item {
name: "/m/040b_t"
id: 82
display_name: "refrigerator"
}
item {
name: "/m/0bt_c3"
id: 84
display_name: "book"
}
item {
name: "/m/01x3z"
id: 85
display_name: "clock"
}
item {
name: "/m/02s195"
id: 86
display_name: "vase"
}
item {
name: "/m/01lsmm"
id: 87
display_name: "scissors"
}
item {
name: "/m/0kmg4"
id: 88
display_name: "teddy bear"
}
item {
name: "/m/03wvsk"
id: 89
display_name: "hair drier"
}
item {
name: "/m/012xff"
id: 90
display_name: "toothbrush"
}
</code></pre>
<p>Edit: after closely comparing the two lists it is clear that they both contain the same 80 classes, but the label map tensorflow uses by default skips 10 class ids, seemingly randomly distributed.</p>
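<p>For reference, the skipped ids can be listed with a few lines (the path is a placeholder for wherever your copy of <code>mscoco_label_map.pbtxt</code> lives):</p>
<pre><code>import re

with open("mscoco_label_map.pbtxt") as f:        # placeholder path
    present = {int(i) for i in re.findall(r"id:\s*(\d+)", f.read())}
print(sorted(set(range(1, 91)) - present))
# [12, 26, 29, 30, 45, 66, 68, 69, 71, 83]
</code></pre>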
<p>Has anybody an idea why that is?</p> | 2018-06-03 09:50:09.270000+00:00 | 2019-03-14 08:57:53.703000+00:00 | 2018-06-05 05:34:09.657000+00:00 | tensorflow|tensorflow-datasets|tfrecord | ['https://arxiv.org/pdf/1405.0312.pdf'] | 1 |
48,306,714 | <h2>Dynamic placeholders</h2>
<p>Tensorflow allows you to have <em>multiple</em> dynamic (a.k.a. <code>None</code>) dimensions in placeholders. The engine won't be able to ensure correctness while the graph is built, hence the client is responsible for feeding the correct input, but this provides a lot of flexibility.</p>
<p>So I'm going from...</p>
<pre class="lang-py prettyprint-override"><code>x = tf.placeholder(tf.float32, shape=[None, N*M*P])
y_ = tf.placeholder(tf.float32, shape=[None, N*M*P, 3])
...
x_image = tf.reshape(x, [-1, N, M, P, 1])
</code></pre>
<p>to...</p>
<pre class="lang-py prettyprint-override"><code># Nearly all dimensions are dynamic
x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])
</code></pre>
<p>Since you intend to reshape the input to 5D anyway, so why don't use 5D in <code>x_image</code> right from the start. At this point, the second dimension of <code>label</code> is arbitrary, but we <em>promise</em> tensorflow that it will match with <code>x_image</code>.</p>
<h2>Dynamic shapes in deconvolution</h2>
<p>Next, the nice thing about <a href="https://www.tensorflow.org/api_docs/python/tf/nn/conv3d_transpose" rel="noreferrer"><code>tf.nn.conv3d_transpose</code></a> is that its output shape can be dynamic. So instead of this:</p>
<pre class="lang-py prettyprint-override"><code># Hard-coded output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=[1,32,32,7,1], ...)
</code></pre>
<p>... you can do this:</p>
<pre class="lang-py prettyprint-override"><code># Dynamic output shape
DeConnv1 = tf.nn.conv3d_transpose(layer1, w, output_shape=tf.shape(x_image), ...)
</code></pre>
<p>This way the transpose convolution can be applied to <em>any</em> image and the result will take the shape of <code>x_image</code> that was actually passed in at runtime. </p>
<p>Note that static shape of <code>x_image</code> is <code>(?, ?, ?, ?, 1)</code>.</p>
<h2>All-Convolutional network</h2>
<p>The final and most important piece of the puzzle is to make <strong>the whole network</strong> convolutional, and that includes your final dense layer too. A dense layer <em>must</em> define its dimensions statically, which forces the whole neural network to fix the input image dimensions.</p>
<p>Luckily for us, Springenberg et al. describe a way to replace an FC layer with a CONV layer in the <a href="https://arxiv.org/abs/1412.6806" rel="noreferrer">"Striving for Simplicity: The All Convolutional Net"</a> paper. I'm going to use a convolution with 3 <code>1x1x1</code> filters (see also <a href="https://stats.stackexchange.com/q/194142/130598">this question</a>):</p>
<pre class="lang-py prettyprint-override"><code>final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
</code></pre>
<p>If we ensure that <code>final</code> has the same dimensions as <code>DeConnv1</code> (and others), it'll make <code>y</code> right the shape we want: <code>[-1, N * M * P, 3]</code>.</p>
<h2>Combining it all together</h2>
<p>Your network is pretty large, but all deconvolutions basically follow the same pattern, so I've simplified my <em>proof-of-concept</em> code to just one deconvolution. The goal is just to show what kind of network is able to handle images of arbitrary size. Final remark: image dimensions can vary <em>between</em> batches, but within one batch they have to be the same.</p>
<p>The full code:</p>
<pre class="lang-py prettyprint-override"><code>sess = tf.InteractiveSession()
def conv3d_dilation(tempX, tempFilter):
return tf.layers.conv3d(tempX, filters=tempFilter, kernel_size=[3, 3, 1], strides=1, padding='SAME', dilation_rate=2)
def conv3d(tempX, tempW):
return tf.nn.conv3d(tempX, tempW, strides=[1, 2, 2, 2, 1], padding='SAME')
def conv3d_s1(tempX, tempW):
return tf.nn.conv3d(tempX, tempW, strides=[1, 1, 1, 1, 1], padding='SAME')
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def max_pool_3x3(x):
return tf.nn.max_pool3d(x, ksize=[1, 3, 3, 3, 1], strides=[1, 2, 2, 2, 1], padding='SAME')
x_image = tf.placeholder(tf.float32, shape=[None, None, None, None, 1])
label = tf.placeholder(tf.float32, shape=[None, None, 3])
W_conv1 = weight_variable([3, 3, 1, 1, 32])
h_conv1 = conv3d(x_image, W_conv1)
# second convolution
W_conv2 = weight_variable([3, 3, 4, 32, 64])
h_conv2 = conv3d_s1(h_conv1, W_conv2)
# third convolution path 1
W_conv3_A = weight_variable([1, 1, 1, 64, 64])
h_conv3_A = conv3d_s1(h_conv2, W_conv3_A)
# third convolution path 2
W_conv3_B = weight_variable([1, 1, 1, 64, 64])
h_conv3_B = conv3d_s1(h_conv2, W_conv3_B)
# fourth convolution path 1
W_conv4_A = weight_variable([3, 3, 1, 64, 96])
h_conv4_A = conv3d_s1(h_conv3_A, W_conv4_A)
# fourth convolution path 2
W_conv4_B = weight_variable([1, 7, 1, 64, 64])
h_conv4_B = conv3d_s1(h_conv3_B, W_conv4_B)
# fifth convolution path 2
W_conv5_B = weight_variable([1, 7, 1, 64, 64])
h_conv5_B = conv3d_s1(h_conv4_B, W_conv5_B)
# sixth convolution path 2
W_conv6_B = weight_variable([3, 3, 1, 64, 96])
h_conv6_B = conv3d_s1(h_conv5_B, W_conv6_B)
# concatenation
layer1 = tf.concat([h_conv4_A, h_conv6_B], 4)
w = tf.Variable(tf.constant(1., shape=[2, 2, 4, 1, 192]))
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter=w, output_shape=tf.shape(x_image), strides=[1, 2, 2, 2, 1], padding='SAME')
final = DeConnv1
final_conv = conv3d_s1(final, weight_variable([1, 1, 1, 1, 3]))
y = tf.reshape(final_conv, [-1, 3])
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=label, logits=y))
print('x_image:', x_image)
print('DeConnv1:', DeConnv1)
print('final_conv:', final_conv)
def try_image(N, M, P, B=1):
batch_x = np.random.normal(size=[B, N, M, P, 1])
batch_y = np.ones([B, N * M * P, 3]) / 3.0
deconv_val, final_conv_val, loss = sess.run([DeConnv1, final_conv, cross_entropy],
feed_dict={x_image: batch_x, label: batch_y})
print(deconv_val.shape)
print(final_conv.shape)
print(loss)
print()
tf.global_variables_initializer().run()
try_image(32, 32, 7)
try_image(16, 16, 3)
try_image(16, 16, 3, 2)
</code></pre> | 2018-01-17 17:24:03.947000+00:00 | 2018-01-17 17:24:03.947000+00:00 | null | null | 48,230,031 | <p>I am attempting to create a deep CNN that can classify each individual pixel in an image. I am replicating architecture from the image below taken from <a href="https://github.com/dhasl002/Research-DeepLearning/blob/master/DEEP.pdf" rel="noreferrer">this</a> paper. In the paper it is mentioned that deconvolutions are used so that any size of input is possible. This can be seen in the image below. </p>
<p><a href="https://github.com/dhasl002/Research-DeepLearning" rel="noreferrer">Github Repository</a></p>
<p><a href="https://i.stack.imgur.com/VgITR.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VgITR.png" alt="enter image description here"></a></p>
<p>Currently, I have hard coded my model to accept images of size 32x32x7, but I would like to accept any size of input. <strong>What changes would I need to make to my code to accept variable sized input?</strong></p>
<pre><code> x = tf.placeholder(tf.float32, shape=[None, 32*32*7])
y_ = tf.placeholder(tf.float32, shape=[None, 32*32*7, 3])
...
DeConnv1 = tf.nn.conv3d_transpose(layer1, filter = w, output_shape = [1,32,32,7,1], strides = [1,2,2,2,1], padding = 'SAME')
...
final = tf.reshape(final, [1, 32*32*7])
W_final = weight_variable([32*32*7,32*32*7,3])
b_final = bias_variable([32*32*7,3])
final_conv = tf.tensordot(final, W_final, axes=[[1], [1]]) + b_final
</code></pre> | 2018-01-12 16:11:59.610000+00:00 | 2018-01-17 17:24:03.947000+00:00 | 2018-01-12 16:19:08.013000+00:00 | python|tensorflow|deep-learning|conv-neural-network|deconvolution | ['https://www.tensorflow.org/api_docs/python/tf/nn/conv3d_transpose', 'https://arxiv.org/abs/1412.6806', 'https://stats.stackexchange.com/q/194142/130598'] | 3 |
17,574,037 | <p>Chapters 2 and 3 of <a href="https://rads.stackoverflow.com/amzn/click/com/1107002176" rel="nofollow noreferrer" rel="nofollow noreferrer">Nielsen and Chuang</a> should give you the background you need. </p>
<p><a href="https://rads.stackoverflow.com/amzn/click/com/0738202967" rel="nofollow noreferrer" rel="nofollow noreferrer">The Feynman Lectures on Computation</a> provides an easy to understand introduction to CS for physicists.</p>
<p>Beyond that, you can read some of <a href="http://arxiv.org/abs/quant-ph/0110143" rel="nofollow noreferrer">Kitaev's Arxiv papers</a> to see whether you're a genius.</p>
<p>SICP may not be directly applicable, but it may very well be the best programming book ever written, so it's <strong>always</strong> useful!</p> | 2013-07-10 14:58:15.847000+00:00 | 2013-07-10 14:58:15.847000+00:00 | null | null | 17,568,480 | <p>I browsed some books about quantum computers, and there are some concepts from computer science (for example, the Turing machine) in addition to the quantum physics and mathematics. So, if I want to study quantum computing, what should I know from computer science? Is it useful to read SICP, for example?</p> | 2013-07-10 10:38:42.323000+00:00 | 2016-05-12 18:47:33.043000+00:00 | null | computer-science|quantum-computing | ['https://rads.stackoverflow.com/amzn/click/com/1107002176', 'https://rads.stackoverflow.com/amzn/click/com/0738202967', 'http://arxiv.org/abs/quant-ph/0110143'] | 3
9,692,151 | <p>For handwritten character recognition you need</p>
<ol>
<li>many training examples (maybe you should create distortions of your training set)</li>
<li>softmax activation function in the output layer</li>
<li>cross entropy error function</li>
<li>training with <strong>stochastic</strong> gradient descent</li>
<li>a bias in each layer</li>
</ol>
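<p>A minimal sketch that puts those pieces together (scikit-learn's bundled 8x8 digits stand in for MNIST here; the layer sizes are just an illustration):</p>
<pre><code>from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# softmax output + cross-entropy loss + stochastic gradient descent + biases
clf = MLPClassifier(hidden_layer_sizes=(200, 50), solver="sgd", max_iter=500,
                    random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))
</code></pre>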
<p>A good test problem is the handwritten digit data set <a href="http://yann.lecun.com/exdb/mnist/" rel="noreferrer">MNIST</a>. Here are papers that successfully applied neural networks on this data set:</p>
<p>Y. LeCun, L. Bottou, Y. Bengio and P. Haffner: Gradient-Based Learning Applied to Document Recognition, <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf" rel="noreferrer">http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf</a></p>
<p>Dan Claudiu Ciresan, Ueli Meier, Luca Maria Gambardella, Juergen Schmidhuber: Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition, <a href="http://arxiv.org/abs/1003.0358" rel="noreferrer">http://arxiv.org/abs/1003.0358</a></p>
<p>I trained an MLP with 784-200-50-10 architecture and got >96% accuracy on the test set.</p> | 2012-03-13 21:04:29.130000+00:00 | 2012-03-13 21:09:38.470000+00:00 | 2012-03-13 21:09:38.470000+00:00 | null | 9,684,204 | <p>Currently I'm learning about neural networks and I'm trying to create an application that can be trained to recognize handwritten characters.
For this problem I use a feed-forward neural network, and it seems to work when I train it to recognize 1, 2 or 3 different characters. But when I try to make the network learn more than 3 characters, it stagnates at an error percentage of around 40 - 60%. </p>
<p>I tried multiple layers and fewer/more neurons, but I can't seem to get it right, so now I'm wondering if a feedforward neural network is capable of recognizing that much information. </p>
<p>Some statistics:</p>
<p><strong>Network type:</strong> Feed-forward neural network</p>
<p><strong>Input neurons:</strong> 100 (a 10 * 10) grid is used to draw the characters</p>
<p><strong>Output neurons:</strong> The number of characters to recognize</p>
<p><em>Does anyone know what the possible flaw in my architecture is? Are there too many input neurons? Is the feedforward neural network not capable of character recognition?</em></p> | 2012-03-13 12:48:20.760000+00:00 | 2019-02-12 12:00:49.937000+00:00 | 2019-01-21 12:18:03.670000+00:00 | artificial-intelligence|neural-network|ocr|backpropagation|feed-forward | ['http://yann.lecun.com/exdb/mnist/', 'http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf', 'http://arxiv.org/abs/1003.0358'] | 3
50,584,818 | <p>Pretty much every network uses batch normalization, which is exactly that. Paper can be found here: (<a href="https://arxiv.org/abs/1502.03167" rel="nofollow noreferrer">https://arxiv.org/abs/1502.03167</a>). In essence it normalizes the values to be 0 mean and unit variance before being fed into the next layer. Another work is on self normalizing linear units (selu), which in some sense does this automatically without needing any kind of scaling. Paper can be found here: (<a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">https://arxiv.org/abs/1706.02515</a>). </p> | 2018-05-29 12:47:32.517000+00:00 | 2018-05-30 07:40:00.550000+00:00 | 2018-05-30 07:40:00.550000+00:00 | null | 50,583,712 | <p>Would anyone here know if there is any kind of normalisation or scaling between layers in existing Neural Network arcitectures?</p>
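<p>For intuition, a small numpy sketch (illustration only, not from either paper) of the per-feature normalisation that a batch norm layer applies to a batch of activations at training time:</p>
<pre><code>import numpy as np

x = np.random.randn(64, 128) * 50 + 10      # a batch whose activations have drifted
x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)   # zero mean, unit variance
gamma, beta = np.ones(128), np.zeros(128)   # learnable scale and shift
y = gamma * x_hat + beta
print(round(x.std(), 1), "->", round(y.std(), 1))
</code></pre>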
<p>Scaling inputs is common and i am familiar with ReLU blow up. Most models i see indicate a small range of values like -2 to +2 but i don't see how this can be maintained from layer to layer. Irrespective of the activation function the second layer output is in the tens then the third layer is hundreds and final output is tens of thousands. In the worst case the layer returns NaN. A work around can be by scaling or alternating ReLU/sigmoid but I would like to know if this is this common?</p> | 2018-05-29 11:52:04.787000+00:00 | 2018-05-30 07:40:00.550000+00:00 | null | machine-learning|neural-network|deep-learning|conv-neural-network|perceptron | ['https://arxiv.org/abs/1502.03167', 'https://arxiv.org/abs/1706.02515'] | 2 |
44,813,517 | <p>I am currently working on a meta-analysis for my comprehensive exams and fit pretty much the same model you are talking about: I can have multiple effect sizes drawn from the same study. I would not fit a multilevel meta-analytic model using <code>metafor</code>, as it does not appropriately capture the correlated error terms. I wrote out my thinking in my paper (still working on it), so here is a rough explanation from my comps on how to address this situation:</p>
<blockquote>
<p>I gathered $k = 240$ effect sizes across $m = 90$ studies. Table 2
describes the distribution across studies: Half of the studies
reported more than one effect size, with three studies reporting as
many as 15 effect sizes. Traditional meta-analytic methodologies
assume that all effect sizes are independent of one another; this
assumption is severely violated in the present analysis, as effect
sizes drawn from the same participants are dependent on one another. </p>
<p>One would ideally use a multivariate meta-analysis to model these
dependencies; however, this requires the meta-analyst to have access
to the full covariance matrix of all measures in all studies. This is
not realistic in many settings [@jackson2011multivariate],
particularly in the present meta-analysis of a literature where (a)
researchers hardly publish this information and (b) the research has
been published over the course of 70 years, making acquiring this
information from the authors of many of theses studies impossible.<br>
Multilevel meta-analysis has been proposed as a way to deal with
unknown dependencies between effect sizes [@cheung2014modeling;
@konstantopoulos2011fixed; @van2013three]. While some argue that
individuals could be modeled at Level 1, effect sizes at Level 2, and
study at Level 3 [e.g., @cheung2014modeling], three-level
meta-analyses still assume that residual errors are orthogonal within
clusters [@tanner2016handling]. This assumption is violated when
multiple effect sizes are drawn from the same participants. </p>
<p>The current “state-of-the-art” [@polanin2017review] way to model these
dependencies and avoid underestimating standard errors is to use
robust variance estimates [RVE; @hedges2010robust;
@tanner2016handling]. I performed my meta-analysis using RVE for
correlated effects in the <code>robumeta</code> R package [@fisher2015robumeta]. </p>
<p>As mentioned above, I am able to calculate the variances of effect
sizes directly from sample size, but I am <em>not</em> able to calculate the
covariance between effect sizes. RVE solves this problem by using the
cross products of the residuals for each study to estimate the
variance-covariance matrix of effect sizes within a study. While the
estimate of the covariance matrix in each study is not very good, the
combined variance estimate converges to the true variance as the
number of studies approaches infinity [@hedges2010robust]. </p>
<p>Traditional meta-analyses weight effect sizes by using the inverse of
the variance. RVE weights each effect size using (a) the inverse of
the average variance across all effect sizes in a study (assuming a
constant correlation across effect sizes) (b) divided by the number of
effect sizes in the study. This ensures that a study does not get
"extra" weight simply by having more effect sizes. </p>
<p>This method is used primarily for the purposes of this meta-analysis:
interpreting meta-regression coefficients. The variance estimates
found in other meta-analyses (e.g., $Q, I^2, \tau^2$) are not precise
when using RVE. Given this shortcoming of RVE—and my main focus in
interpreting meta-regression coefficients—I will not focus on these
estimates.</p>
</blockquote>
<p>References (from my .bib file, sorry if the format is annoying):</p>
<pre><code>@article{jackson2011multivariate,
title={Multivariate meta-analysis: Potential and promise},
author={Jackson, Dan and Riley, Richard and White, Ian R},
journal={Statistics in Medicine},
volume={30},
number={20},
pages={2481--2498},
year={2011},
publisher={Wiley Online Library}
}
@article{cheung2014modeling,
title={Modeling dependent effect sizes with three-level meta-analyses: A structural equation modeling approach},
author={Cheung, Mike W L},
journal={Psychological Methods},
volume={19},
number={2},
pages={211--229},
year={2014}
}
@article{konstantopoulos2011fixed,
title={Fixed effects and variance components estimation in three-level meta-analysis},
author={Konstantopoulos, Spyros},
journal={Research Synthesis Methods},
volume={2},
number={1},
pages={61--76},
year={2011},
publisher={Wiley Online Library}
}
@article{van2013three,
title={Three-level meta-analysis of dependent effect sizes},
author={Van den Noortgate, Wim and L{\'o}pez-L{\'o}pez, Jos{\'e} Antonio and Mar{\'\i}n-Mart{\'\i}nez, Fulgencio and S{\'a}nchez-Meca, Julio},
journal={Behavior Research Methods},
volume={45},
number={2},
pages={576--594},
year={2013},
publisher={Springer}
}
@article{tanner2016handling,
title={Handling complex meta-analytic data structures using robust variance estimates: A tutorial in {R}},
author={Tanner-Smith, Emily E and Tipton, Elizabeth and Polanin, Joshua R},
journal={Journal of Developmental and Life-Course Criminology},
volume={2},
number={1},
pages={85--112},
year={2016},
publisher={Springer}
}
@article{polanin2017review,
title={A Review of Meta-Analysis Packages in {R}},
author={Polanin, Joshua R and Hennessy, Emily A and Tanner-Smith, Emily E},
journal={Journal of Educational and Behavioral Statistics},
volume={42},
number={2},
pages={206--242},
year={2017},
publisher={SAGE Publications Sage CA: Los Angeles, CA}
}
@article{hedges2010robust,
title={Robust variance estimation in meta-regression with dependent effect size estimates},
author={Hedges, Leon V and Tipton, Elizabeth and Johnson, Matthew C},
journal={Research synthesis methods},
volume={1},
number={1},
pages={39--65},
year={2010}
}
@article{fisher2015robumeta,
title={robumeta: An {R}-package for robust variance estimation in meta-analysis},
author={Fisher, Zachary and Tipton, Elizabeth},
journal={arXiv preprint arXiv:1503.02220},
year={2015}
}
</code></pre> | 2017-06-28 22:28:27.753000+00:00 | 2017-06-28 22:28:27.753000+00:00 | null | null | 44,811,867 | <p>Does this code look right for a multilevel meta-analysis in R using the metafor package?</p>
<p>I have effect sizes ("id") nested within articles ("citation") nested within data sets ("data"). To clarify, multiple effect sizes are often reported within the same published work; and different published works often use the same data.</p>
<pre><code>inf <- rma.mv(infcoef, infvar, random = ~ 1 | data/citation/id, data=dat)
</code></pre>
<p>I've looked at <a href="http://www.metafor-project.org/doku.php/analyses:konstantopoulos2011" rel="nofollow noreferrer">Konstantopoulos, 2011</a>, but I think I have an extra level of clustering so I want to make sure I've specified the model correctly.</p>
<p>Thanks!</p> | 2017-06-28 20:20:26.180000+00:00 | 2017-06-28 22:28:27.753000+00:00 | 2017-06-28 21:47:11.883000+00:00 | r|multi-level | [] | 0 |
65,984,606 | <p>There are a couple of different ways to approach this problem. Based on the comments, this sounds like univariate, multi-step time series forecasting, albeit across many different events.</p>
<p>First, to clarify: most deep learning models/frameworks for time series take data in the format <code>(batch_size, n_historical_steps, n_feature_time_series)</code> and output the result in the format <code>(batch_size, n_forecasted_steps, n_targets)</code>.</p>
<p>Since this is a univariate forecasting problem, <code>n_feature_time_series</code> would be one (unless I'm missing something). Now <code>n_historical_steps</code> is a hyperparameter we often optimize on, as the entire temporal history is often not relevant to forecasting the next n time steps, so you might want to try optimizing that as well. However, let's say you choose to use the full temporal history; then this would look like <code>(batch_size, 200, 1)</code>. Following this approach you would then have an output shape of <code>(batch_size, 100, 1)</code>. You could then use a batch_size of 1000 to feed in all the different events at once (assuming of course you have a separate validation/test set). This would give you an input shape of <code>(1000, 200, 1)</code>. This is how you would likely do it, for instance, if you were going to use models like DA-RNN, LSTM, vanilla Transformer, etc.</p>
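<p>In numpy terms the windowing above is just (shapes only, my own illustration):</p>
<pre><code>import numpy as np

series = np.random.rand(1000, 300)        # 1000 events, 300 interpolated points each
X = series[:, :200][..., np.newaxis]      # (batch_size, n_historical_steps, 1) = (1000, 200, 1)
y = series[:, 200:][..., np.newaxis]      # (batch_size, n_forecasted_steps, 1) = (1000, 100, 1)
print(X.shape, y.shape)
</code></pre>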
<p>There are some other models though that would create a learnable series embedding_id such as the <a href="https://arxiv.org/abs/1907.00235" rel="nofollow noreferrer">Convolutional Transformer Paper</a> or <a href="https://arxiv.org/abs/1704.04110" rel="nofollow noreferrer">Deep AR</a>. This is essentially a unique series identifier that would be associated with each event and the model would learn to forecast in the same pass on each.</p>
<p>I have models of both varieties implemented that you could use in <a href="https://github.com/AIStream-Peelout/flow-forecast" rel="nofollow noreferrer">Flow Forecast</a>. Though I don't have any detailed tutorials on this type of problem at the moment. I will also say also that in all honesty given that you only have 1000 BB events (each with only 300 univariate time steps) and the many variables in play at Basketball I doubt that you will be able to accomplish this task with any real degree of accuracy. I would guess you probably need at least 20k+ basketball event data to be able to forecast this type of problem well with deep learning at least.</p> | 2021-01-31 21:47:00.553000+00:00 | 2021-01-31 22:25:19.463000+00:00 | 2021-01-31 22:25:19.463000+00:00 | null | 65,972,853 | <p>I want to see if the following problem can be solved by using neural networks: I have a database containing over 1000 basketball events, where the total score has been recorded every second from minute 5 till minute 20, and where the basketball games are all from the same league. This means that the events are occurring on different time periods. The data is afterwards interpolated to have the exact time difference between two timesteps, and thus obtaining exactly 300 points between minute 5 and minute 20. This can be seen here:
<a href="https://i.stack.imgur.com/f6B62.png" rel="nofollow noreferrer">Time series</a>. The final goal is to have a model that can predict the y values between t=15 till t=20 and use as input data the y values between t=5 and t=15. I want to train the model by using the database containing the 1000 events. For this I tried using the following network:</p>
<p><a href="https://i.stack.imgur.com/yt2wH.png" rel="nofollow noreferrer">input data vs output data</a></p>
<p><a href="https://i.stack.imgur.com/d7g0Q.png" rel="nofollow noreferrer">Neural network</a></p>
<p>The input data that will be used to train the neural network model would have the shape (1000,200), and the output data would have the shape (1000,100).
Can someone maybe guide me in the right direction for this and maybe give some feedback if this is a correct approach for such a problem, I have found some previous time series problems, but all of them were based on one large time series, while in this situation I have 1000 different time series.</p> | 2021-01-30 20:32:50.243000+00:00 | 2021-01-31 22:25:19.463000+00:00 | 2021-01-31 20:56:15.450000+00:00 | machine-learning|neural-network|time-series|regression | ['https://arxiv.org/abs/1907.00235', 'https://arxiv.org/abs/1704.04110', 'https://github.com/AIStream-Peelout/flow-forecast'] | 3 |
61,493,173 | <p>I agree with your intuition on epochs. It is common to keep this value as low as possible in order to complete more training "experiments" in the same number of working hours. I don't have a great reference here, but I would welcome one in the comments.</p>
<p>For almost everything else, there is a paper by Leslie N. Smith that I can't recommend enough, <a href="https://arxiv.org/abs/1803.09820" rel="nofollow noreferrer">A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay</a>.</p>
<p>As you can see, batch size is included but epochs are not. You will also notice that the model architecture is not included (number of layers, layer size, etc). <a href="https://en.wikipedia.org/wiki/Neural_architecture_search" rel="nofollow noreferrer">Neural Architecture Search</a> is a huge research field in its own right, separate from hyper-parameter tuning.</p>
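<p>In practice that means something like the following (a hedged sketch using scikit-learn's <code>ParameterSampler</code>; <code>train_and_validate</code> is a hypothetical stand-in for your own training routine, and the epoch budget stays fixed and low):</p>
<pre><code>from scipy.stats import loguniform
from sklearn.model_selection import ParameterSampler

space = {
    "batch_size": [32, 64, 128, 256],
    "learning_rate": loguniform(1e-4, 1e-1),
    "weight_decay": loguniform(1e-6, 1e-2),
}

for params in ParameterSampler(space, n_iter=20, random_state=0):
    # score = train_and_validate(**params, epochs=5)   # hypothetical routine
    print(params)
</code></pre>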
<hr>
<p>As for the loss function, I can't think of any reason to "tune" that except in the context of an Auxiliary Loss for training only, which I suspect is not what you are talking about.</p>
<p>The loss function that will be applied to your validation or test set is part of the problem statement. That, along with the data, defines the problem you are solving. You don't change it by tuning; you change it by convincing a product manager that your alternative is better for the business need.</p> | 2020-04-29 02:23:25.133000+00:00 | 2020-04-29 02:56:46.320000+00:00 | 2020-04-29 02:56:46.320000+00:00 | null | 61,490,006 | <p>I'm fairly new to machine learning, and working on optimizing hyperparameters for my model. I'm doing this via a randomized search. My question is: <strong>should I be searching over # of epochs and batch size along with my other hyperparameters (e.g. loss function, number of layers, etc.)? If not, should I fix these values first, find the other parameters, then return to tune these?</strong></p>
<p>My concern is a) that searching over many epochs will be extremely time-consuming, so leaving it at one low value for the initial scan would be useful and b) that these parameters, esp. # of epochs, will disproportionately affect the results when the model is behaving well, and won't really give me much information about the rest of my architecture, as there should be a regime where more epochs, up to a point, are better. I know this isn't totally accurate, i.e. # of epochs is a real hyperparameter and too many can lead to overfitting issues, for example. Currently, my model is not clearly improving with # of epochs, though it was suggested by someone working on a similar problem within my area of research that this may be mitigated by implementing batch normalization, which is another parameter I am testing. Finally, I am worried that batch size will be quite affected by the fact that I am scaling my data down to 60% to allow my code to run reasonably (and I think the final model will be trained on vastly more data than the simulated data currently available to me).</p> | 2020-04-28 21:06:54.997000+00:00 | 2020-04-29 02:56:46.320000+00:00 | null | machine-learning|neural-network|hyperparameters | ['https://arxiv.org/abs/1803.09820', 'https://en.wikipedia.org/wiki/Neural_architecture_search'] | 2 |
35,681,340 | <p>One option is to make a hierarchical classifier. So in a first step you use the frontal face classifier (assuming that most pictures are frontal faces). If the classifier fails, you try with the profile classifier. </p>
<p>I did that with a dataset of faces and it improved my overall classification accuracy. Furthermore, if you have some a priori information, you can use it. In my case the faces were usually in the upper middle part of the picture.</p>
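<p>The fall-back logic itself is only a few lines; here is a sketch with OpenCV's Python bindings rather than MATLAB (the image path is a placeholder):</p>
<pre><code>import cv2

frontal = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_profileface.xml")

gray = cv2.cvtColor(cv2.imread("face.jpg"), cv2.COLOR_BGR2GRAY)   # placeholder image
faces = frontal.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) == 0:                     # frontal detector failed, fall back to profile
    faces = profile.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(faces)
</code></pre>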
<p>To further improve your performance, without using the two classifiers in MATLAB you are using, you would need to change your technique (and probably your programming language). This is the best method so far: <a href="http://arxiv.org/abs/1503.03832" rel="nofollow">Facenet</a>. </p> | 2016-02-28 10:27:48.223000+00:00 | 2016-02-28 10:27:48.223000+00:00 | null | null | 35,672,196 | <p>I am trying to detect the faces using the Matlab built-in viola jones face detection. Is there anyway that I can combine two classification models like "FrontalFaceCART" and "ProfileFace" into one in order to get a better result?</p>
<p>Thank you.</p> | 2016-02-27 16:23:28.627000+00:00 | 2016-03-14 08:56:54.533000+00:00 | 2016-03-14 08:56:54.533000+00:00 | matlab|computer-vision|classification|face-detection|matlab-cvst | ['http://arxiv.org/abs/1503.03832'] | 1 |
47,370,934 | <p>There are many related answers to this on the broader question of how <a href="http://probabilistic-programming.org" rel="noreferrer">probabilistic programming</a> benefits from <a href="https://arxiv.org/abs/1701.03757" rel="noreferrer">deep probabilistic programming</a> systems.</p>
<p>I can give one pointed answer for Latent Dirichlet Allocation (LDA) in TensorFlow. A key benefit is from recognizing that LDA is just a model. Given this model, and a dataset represented as a document-by-term matrix (e.g., via <a href="https://www.tensorflow.org/api_docs/python/tf/SparseTensor" rel="noreferrer">tf.SparseTensor</a>), TensorFlow lets you not only perform scalable inference but very flexible inference. Specific ops to use in TF depends on the specific algorithm. You can write a Gibbs sampler or coordinate ascent variational inference algorithm—both highly efficient for LDA (usable with manual <code>tf.assign</code> ops on trainable variables). CAVI is computationally and memory-efficient, <a href="https://arxiv.org/abs/1206.7051" rel="noreferrer">scaling to millions of documents</a> and reifiable with efficient data pipelines such as <a href="https://www.tensorflow.org/api_docs/python/tf/data" rel="noreferrer">tf.data</a>.</p>
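<p>For concreteness, a tiny TF 1.x sketch (my own, not tied to any particular LDA implementation) of a document-by-term count matrix as a <code>tf.SparseTensor</code>:</p>
<pre><code>import tensorflow as tf   # TF 1.x API assumed

# (doc, term) -> count, for 3 documents over a 5-word vocabulary
doc_term = tf.SparseTensor(indices=[[0, 1], [0, 4], [1, 0], [2, 3]],
                           values=[2.0, 1.0, 3.0, 1.0],
                           dense_shape=[3, 5])
with tf.Session() as sess:
    print(sess.run(tf.sparse_tensor_to_dense(doc_term)))
</code></pre>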
<p>With TensorFlow, you can also use generic methods such as black box variational inference, which are extremely versatile and do not require manual <code>tf.assign</code> ops. Once you've written it to work well on your problem, you can extend LDA in many ways such as with nonconjugate priors, hierarchical priors, and deep network parameterizations (possible with <a href="https://www.tensorflow.org/api_docs/python/tf/layers" rel="noreferrer">tf.layers</a>). Generic methods require tools such as TensorFlow optimizers and TensorFlow's automatic differentiation for gradient-based optimization. These are not available in Python unless you exploit tracing tools such as <a href="https://github.com/HIPS/autograd/" rel="noreferrer">autograd</a>.</p> | 2017-11-18 20:53:40.473000+00:00 | 2017-11-19 00:35:11.643000+00:00 | 2017-11-19 00:35:11.643000+00:00 | null | 37,903,444 | <p>I wanted to implement LDA with tensorflow as a practice, and I think the tensorflow version may have the advantages below:</p>
<ul>
<li>Fast. If I can use the built-in ops to express the sampling process.</li>
<li>Easy to parallelize. Many ops have been implemented with optimizations for parallelization, so this lda should be easy to run on gpus or distributed clusters.</li>
<li>Shorter and cleaner code. Like many other models, especially NNs, building such models with tensorflow involves less code.</li>
</ul>
<p>However, after I inspected some Python implementations of LDA (for example, <a href="https://github.com/ariddell/lda/" rel="nofollow">https://github.com/ariddell/lda/</a>), I have no idea which TensorFlow ops can be used, what kind of graph should be built, and what optimizer I should choose, because the Gibbs sampling process seems to be all about element-wise updating of the doc-topics and topic-words matrices and the topic counting table. So what can TensorFlow do to simplify and optimize this process?</p>
<p>And can I treat the likelihood of the generated doc to the real input doc as the optimization target and utilize a gradient boost optimizer to minimize the negative of the likelihood, thus get alpha, beta and doc-topics distributions? Because if this is tractable, tensorflow definitely can be used here.</p> | 2016-06-19 02:33:08.623000+00:00 | 2017-12-05 13:09:59.387000+00:00 | null | tensorflow|lda | ['http://probabilistic-programming.org', 'https://arxiv.org/abs/1701.03757', 'https://www.tensorflow.org/api_docs/python/tf/SparseTensor', 'https://arxiv.org/abs/1206.7051', 'https://www.tensorflow.org/api_docs/python/tf/data', 'https://www.tensorflow.org/api_docs/python/tf/layers', 'https://github.com/HIPS/autograd/'] | 7 |
26,948,927 | <p><code><random></code> has some decent parts, and the generators it contains are at least serviceable for many purposes. However, the library and its interfaces are very far from mature. Hence you need to build your own header/library to supply the missing parts, or roll out big guns like Boost or the code from Numerical Recipes.</p>
<p>One quick and easy way of obtaining uniform integer derivates is to multiply uniform floats in the range [0,1) with the modulus and truncating. That spreads the bias all over the range and it is good enough for many off-the-cuff uses. </p>
<p>By contrast, the standard method of taking the remainder of an integer derivate modulo the range collects the bias at the beginning of the range. E.g. the famous <code>rand() % modulus</code>. </p>
<p>Case in point: if your modulus happens to be 2/3 of the derivate's natural modulus (e.g. 0xAAAAAAAAu for 2^32) then all results in the first half the result range are exactly twice as likely as those in the upper half of the result range. <strong>Not</strong> recommended for quality code.</p>
<p>To get an unbiassed integer derivate, use the rejection method. Here is one example that uses a full-size random integers as a basis. You can template it on word size and generator, stuff it in your 'fix-the-std' header and be done for all time:</p>
<pre><code>uint64_t random_uint64 ();
uint64_t random_uint64 (uint64_t modulus)
{
if (modulus)
{
for ( ; ; )
{
uint64_t raw_bits = random_uint64();
uint64_t result = raw_bits % modulus;
uint64_t check = uint64_t(raw_bits - result + modulus);
if (check >= raw_bits || check == 0)
{
return result;
}
}
}
return 0;
}
</code></pre>
<p><code>std::uniform_int_distribution<></code> does something very similar internally... but there the logic is well protected against industrial espionage by the usual hundreds of lines of fluff, and the awkward interface ensures that people cannot simply use that functionality just because they feel like it.</p>
<p>Just for completeness, here's a simple and fast generator of excellent, proven quality (Sebastiano Vigna's <a href="http://arxiv.org/pdf/1404.0390.pdf" rel="nofollow">xorshift64*</a>) that makes a nice all-round generator when the extremely long period of a big gun like <a href="http://xorshift.di.unimi.it/" rel="nofollow">xorshift1024*</a> is not needed:</p>
<pre><code>uint64_t random_seed64 = 42;
uint64_t random_uint64 ()
{
uint64_t x = random_seed64;
x ^= x >> 12; x ^= x << 25; x ^= x >> 27;
random_seed64 = x;
return x * 2685821657736338717ull;
}
</code></pre>
<p>The generators included in the standard all have their peculiarities and problems, you have to know their strengths and weaknesses in order to make a good choice. If you're not aiming for a PhD in random number generation and computational statistics then you might be better off using tried and trusted code that is of proven quality.</p> | 2014-11-15 17:58:18.507000+00:00 | 2014-11-15 18:05:01.303000+00:00 | 2014-11-15 18:05:01.303000+00:00 | null | 26,947,324 | <p>I have code similar to the following:</p>
<pre><code> vector<int> vec;
// stuff vector here
random_device rd;
minstd_rand generator(rd());
uniform_int_distribution<unsigned> dist(0 , vec.size() - 1);
while (vec.size() > 0)
{
auto it = vec.begin() + dist(generator);
// use *it for something
swap(*it, *(vec.end() - 1));
vec.pop_back();
}
</code></pre>
<p>I know I can construct/destruct a local distribution inside the loop. But I'd rather just adjust the bounds of <code>dist</code> inside the loop. Can I do this?</p> | 2014-11-15 15:18:00.077000+00:00 | 2014-11-15 18:47:09.227000+00:00 | null | c++|c++11|random | ['http://arxiv.org/pdf/1404.0390.pdf', 'http://xorshift.di.unimi.it/'] | 2 |
58,913,401 | <h2>Update</h2>
<p>According to the new edit in the question, you need a way to identify new people on the fly whose photos might not have been available during the training phase of the model. Such tasks are called <b>few-shot learning</b>. This is similar to the requirement of intelligence/police agencies to find their targets in CCTV camera footage, where there are usually not enough images of a specific target for training; in that setting they use models such as <a href="http://arxiv.org/abs/1503.03832" rel="nofollow noreferrer">FaceNet</a>. I really suggest reading the paper; however, I explain a few of its highlights here:</p>
<ul>
<li>Generally, the last layer of a classifier is an n*1 vector with n-1 of the elements almost equal to zero and one close to 1. The element close to 1 determines the classifier's prediction of the input's label. <a href="https://i.stack.imgur.com/b9QyF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b9QyF.png" alt="Typical CNN architecture" /></a></li>
<li>The authors figured out that if they train a
classifier network with a specific loss function on a huge dataset of faces, you can use the second-to-last layer's output as a representation of any face, irrespective of whether it was in the training set or not; the authors call this vector the <b>Face Embedding</b>.</li>
<li>The previous result means that with a very well trained FaceNet model, you can summarise any face into a vector. The very interesting attribute of this approach is that the vectors of a specific person's face at different angles/positions/states are close to one another in Euclidean space (this property is enforced by the loss function that the authors chose).<a href="https://i.stack.imgur.com/3T3pj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3T3pj.png" alt="enter image description here" /></a></li>
<li>In summary, you have a model that gets faces as input and returns vectors. Vectors close to each other are very likely to belong to the same person (to check that, you can use KNN or just simple Euclidean distance).</li>
</ul>
<p>One implementation of FaceNet can be found <a href="https://github.com/davidsandberg/facenet" rel="nofollow noreferrer">here</a>. I suggest you try to run it on your computer to get to know what you are actually dealing with. After that, it might be best to do the following:</p>
<ol>
<li>Transform the FaceNet model mentioned in the repository to its
tflite version (<a href="https://medium.com/analytics-vidhya/facenet-on-mobile-cb6aebe38505" rel="nofollow noreferrer">this</a> blogpost might help)</li>
<li>For each photo submitted by the user, use Face API to extract the face(s)</li>
<li>Use the minified model in your app to get the face embeddings of the extracted face.</li>
<li>Process all the images in the gallery of the user, getting the vectors for the faces in the photos.</li>
<li>Then compare each vector found in step4 with each vector found in step3 to get the matches.</li>
</ol>
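<p>Step 5 is the simple part; a rough numpy sketch (my own, with a made-up threshold you would tune on real FaceNet output):</p>
<pre><code>import numpy as np

selected = np.random.rand(1, 128)    # embedding of the face the user selected
gallery = np.random.rand(500, 128)   # one embedding per face found in the gallery
threshold = 1.0                      # tune on real embeddings

dists = np.linalg.norm(gallery - selected, axis=1)
matches = np.where(dists < threshold)[0]   # gallery faces likely showing the same person
print(matches)
</code></pre>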
<h1>Original Answer</h1>
<p>You came across one of the most prevalent challenges of machine learning: Overfitting. Face detection and recognition is a huge area of research on its own and almost all the reasonably accurate models are using some kind of deep learning. Note that even detecting a face accurately is not as easy as it seems, however, as you are doing it on android, you can use <a href="https://developers.google.com/android/reference/com/google/android/gms/vision/face/Face" rel="nofollow noreferrer">Face API</a> for this task. (Other more advanced techniques such as <a href="https://github.com/ipazc/mtcnn" rel="nofollow noreferrer">MTCNN</a> are too slow/difficult to deploy on a handset). It has been shown that just feeding the model with a face photo with a lot of background noise or multiple people inside does not work. So, you really cannot skip this step.</p>
<p>After getting a nice trimmed face of the candidate targets from the background, you need to overcome the challenge of recognising the detected faces. Again, all the competent models to the best of my knowledge, are using some sort of deep learning/convolutional neural networks. Using them on a mobile phone is a challenge, but thanks to <a href="https://www.tensorflow.org/lite" rel="nofollow noreferrer">Tensorflow Lite</a> you can minify them and run them within your app. A project about face recognition on android phones that I had worked on is <a href="https://github.com/farzadz/Australian-Politician-Face-Recognition" rel="nofollow noreferrer">here</a> that you can check.
Keep in mind that any good model should be trained on numerous instances of labelled data; however, there are a plethora of models already trained on large face datasets or other image recognition tasks. To tweak them and reuse their existing knowledge, we can employ <strong>transfer learning</strong>; for a quick start on object detection and transfer learning that is closely related to your case, check <a href="https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193" rel="nofollow noreferrer">this</a> blog post.</p>
<p>Overall, you have to get numerous instances of the faces that you want to detect, plus numerous face pictures of people that you don't care about. Then you need to train a model based on the above-mentioned resources, and finally use TensorFlow Lite to decrease its size and embed it within your app. For each frame, you then call the Android Face API and feed the detected face into the model to identify the person.</p>
<p>Depending on your level of tolerance for delay and the number of training set size and number of targets, you can get various results, however, %90+ accuracy is easily achievable if you have only a few target people.</p> | 2019-11-18 11:18:33.073000+00:00 | 2019-11-20 09:18:03.873000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 58,852,046 | <p>In my app I'm trying to do face recognition on a specific image using Open CV, here first I'm training one image and then after training that image if I run face recognition on that image it successfully recognizes that trained face. However, when I turn to another picture of the same person recognition does not work. It just works on the trained image, so my question is how do I rectify it?</p>
<p>Update:
What I want to do is let the user select an image of a person from storage and then, after training on that selected image, fetch all images from storage whose faces match the trained image.</p>
<p>Here is my activity class:</p>
<pre><code>public class MainActivity extends AppCompatActivity {
private Mat rgba,gray;
private CascadeClassifier classifier;
private MatOfRect faces;
private ArrayList<Mat> images;
private ArrayList<String> imagesLabels;
private Storage local;
ImageView mimage;
Button prev,next;
ArrayList<Integer> imgs;
private int label[] = new int[1];
private double predict[] = new double[1];
Integer pos = 0;
private String[] uniqueLabels;
FaceRecognizer recognize;
private boolean trainfaces() {
if(images.isEmpty())
return false;
List<Mat> imagesMatrix = new ArrayList<>();
for (int i = 0; i < images.size(); i++)
imagesMatrix.add(images.get(i));
Set<String> uniqueLabelsSet = new HashSet<>(imagesLabels); // Get all unique labels
uniqueLabels = uniqueLabelsSet.toArray(new String[uniqueLabelsSet.size()]); // Convert to String array, so we can read the values from the indices
int[] classesNumbers = new int[uniqueLabels.length];
for (int i = 0; i < classesNumbers.length; i++)
classesNumbers[i] = i + 1; // Create incrementing list for each unique label starting at 1
int[] classes = new int[imagesLabels.size()];
for (int i = 0; i < imagesLabels.size(); i++) {
String label = imagesLabels.get(i);
for (int j = 0; j < uniqueLabels.length; j++) {
if (label.equals(uniqueLabels[j])) {
classes[i] = classesNumbers[j]; // Insert corresponding number
break;
}
}
}
Mat vectorClasses = new Mat(classes.length, 1, CvType.CV_32SC1); // CV_32S == int
vectorClasses.put(0, 0, classes); // Copy int array into a vector
recognize = LBPHFaceRecognizer.create(3,8,8,8,200);
recognize.train(imagesMatrix, vectorClasses);
if(SaveImage())
return true;
return false;
}
public void cropedImages(Mat mat) {
Rect rect_Crop=null;
for(Rect face: faces.toArray()) {
rect_Crop = new Rect(face.x, face.y, face.width, face.height);
}
Mat croped = new Mat(mat, rect_Crop);
images.add(croped);
}
public boolean SaveImage() {
File path = new File(Environment.getExternalStorageDirectory(), "TrainedData");
path.mkdirs();
String filename = "lbph_trained_data.xml";
File file = new File(path, filename);
recognize.save(file.toString());
if(file.exists())
return true;
return false;
}
private BaseLoaderCallback callbackLoader = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
switch(status) {
case BaseLoaderCallback.SUCCESS:
faces = new MatOfRect();
//reset
images = new ArrayList<Mat>();
imagesLabels = new ArrayList<String>();
local.putListMat("images", images);
local.putListString("imagesLabels", imagesLabels);
images = local.getListMat("images");
imagesLabels = local.getListString("imagesLabels");
break;
default:
super.onManagerConnected(status);
break;
}
}
};
@Override
protected void onResume() {
super.onResume();
if(OpenCVLoader.initDebug()) {
Log.i("hmm", "System Library Loaded Successfully");
callbackLoader.onManagerConnected(BaseLoaderCallback.SUCCESS);
} else {
Log.i("hmm", "Unable To Load System Library");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION, this, callbackLoader);
}
}
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
prev = findViewById(R.id.btprev);
next = findViewById(R.id.btnext);
mimage = findViewById(R.id.mimage);
local = new Storage(this);
imgs = new ArrayList();
imgs.add(R.drawable.jonc);
imgs.add(R.drawable.jonc2);
imgs.add(R.drawable.randy1);
imgs.add(R.drawable.randy2);
imgs.add(R.drawable.imgone);
imgs.add(R.drawable.imagetwo);
mimage.setBackgroundResource(imgs.get(pos));
prev.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(pos!=0){
pos--;
mimage.setBackgroundResource(imgs.get(pos));
}
}
});
next.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(pos<5){
pos++;
mimage.setBackgroundResource(imgs.get(pos));
}
}
});
Button train = (Button)findViewById(R.id.btn_train);
train.setOnClickListener(new View.OnClickListener() {
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
@Override
public void onClick(View view) {
rgba = new Mat();
gray = new Mat();
Mat mGrayTmp = new Mat();
Mat mRgbaTmp = new Mat();
classifier = FileUtils.loadXMLS(MainActivity.this);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
imgs.get(pos));
Bitmap bmp32 = icon.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, mGrayTmp);
Utils.bitmapToMat(bmp32, mRgbaTmp);
Imgproc.cvtColor(mGrayTmp, mGrayTmp, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(mRgbaTmp, mRgbaTmp, Imgproc.COLOR_BGRA2RGBA);
/*Core.transpose(mGrayTmp, mGrayTmp); // Rotate image
Core.flip(mGrayTmp, mGrayTmp, -1); // Flip along both*/
gray = mGrayTmp;
rgba = mRgbaTmp;
Imgproc.resize(gray, gray, new Size(200,200.0f/ ((float)gray.width()/ (float)gray.height())));
if(gray.total() == 0)
Toast.makeText(getApplicationContext(), "Can't Detect Faces", Toast.LENGTH_SHORT).show();
classifier.detectMultiScale(gray,faces,1.1,3,0|CASCADE_SCALE_IMAGE, new Size(30,30));
if(!faces.empty()) {
if(faces.toArray().length > 1)
Toast.makeText(getApplicationContext(), "Mutliple Faces Are not allowed", Toast.LENGTH_SHORT).show();
else {
if(gray.total() == 0) {
Log.i("hmm", "Empty gray image");
return;
}
cropedImages(gray);
imagesLabels.add("Baby");
Toast.makeText(getApplicationContext(), "Picture Set As Baby", Toast.LENGTH_LONG).show();
if (images != null && imagesLabels != null) {
local.putListMat("images", images);
local.putListString("imagesLabels", imagesLabels);
Log.i("hmm", "Images have been saved");
if(trainfaces()) {
images.clear();
imagesLabels.clear();
}
}
}
}else {
/* Bitmap bmp = null;
Mat tmp = new Mat(250, 250, CvType.CV_8U, new Scalar(4));
try {
//Imgproc.cvtColor(seedsImage, tmp, Imgproc.COLOR_RGB2BGRA);
Imgproc.cvtColor(gray, tmp, Imgproc.COLOR_GRAY2RGBA, 4);
bmp = Bitmap.createBitmap(tmp.cols(), tmp.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(tmp, bmp);
} catch (CvException e) {
Log.d("Exception", e.getMessage());
}*/
/* mimage.setImageBitmap(bmp);*/
Toast.makeText(getApplicationContext(), "Unknown Face", Toast.LENGTH_SHORT).show();
}
}
});
Button recognize = (Button)findViewById(R.id.btn_recognize);
recognize.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if(loadData())
Log.i("hmm", "Trained data loaded successfully");
rgba = new Mat();
gray = new Mat();
faces = new MatOfRect();
Mat mGrayTmp = new Mat();
Mat mRgbaTmp = new Mat();
classifier = FileUtils.loadXMLS(MainActivity.this);
Bitmap icon = BitmapFactory.decodeResource(getResources(),
imgs.get(pos));
Bitmap bmp32 = icon.copy(Bitmap.Config.ARGB_8888, true);
Utils.bitmapToMat(bmp32, mGrayTmp);
Utils.bitmapToMat(bmp32, mRgbaTmp);
Imgproc.cvtColor(mGrayTmp, mGrayTmp, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(mRgbaTmp, mRgbaTmp, Imgproc.COLOR_BGRA2RGBA);
/*Core.transpose(mGrayTmp, mGrayTmp); // Rotate image
Core.flip(mGrayTmp, mGrayTmp, -1); // Flip along both*/
gray = mGrayTmp;
rgba = mRgbaTmp;
Imgproc.resize(gray, gray, new Size(200,200.0f/ ((float)gray.width()/ (float)gray.height())));
if(gray.total() == 0)
Toast.makeText(getApplicationContext(), "Can't Detect Faces", Toast.LENGTH_SHORT).show();
classifier.detectMultiScale(gray,faces,1.1,3,0|CASCADE_SCALE_IMAGE, new Size(30,30));
if(!faces.empty()) {
if(faces.toArray().length > 1)
Toast.makeText(getApplicationContext(), "Mutliple Faces Are not allowed", Toast.LENGTH_SHORT).show();
else {
if(gray.total() == 0) {
Log.i("hmm", "Empty gray image");
return;
}
recognizeImage(gray);
}
}else {
Toast.makeText(getApplicationContext(), "Unknown Face", Toast.LENGTH_SHORT).show();
}
}
});
}
private void recognizeImage(Mat mat) {
Rect rect_Crop=null;
for(Rect face: faces.toArray()) {
rect_Crop = new Rect(face.x, face.y, face.width, face.height);
}
Mat croped = new Mat(mat, rect_Crop);
recognize.predict(croped, label, predict);
int indice = (int)predict[0];
Log.i("hmmcheck:",String.valueOf(label[0])+" : "+String.valueOf(indice));
if(label[0] != -1 && indice < 125)
Toast.makeText(getApplicationContext(), "Welcome "+uniqueLabels[label[0]-1]+"", Toast.LENGTH_SHORT).show();
else
Toast.makeText(getApplicationContext(), "You're not the right person", Toast.LENGTH_SHORT).show();
}
private boolean loadData() {
String filename = FileUtils.loadTrained();
if(filename.isEmpty())
return false;
else
{
recognize.read(filename);
return true;
}
}
}
</code></pre>
<p>My File Utils Class:</p>
<pre><code> public class FileUtils {
private static String TAG = FileUtils.class.getSimpleName();
private static boolean loadFile(Context context, String cascadeName) {
InputStream inp = null;
OutputStream out = null;
boolean completed = false;
try {
inp = context.getResources().getAssets().open(cascadeName);
File outFile = new File(context.getCacheDir(), cascadeName);
out = new FileOutputStream(outFile);
byte[] buffer = new byte[4096];
int bytesread;
while((bytesread = inp.read(buffer)) != -1) {
out.write(buffer, 0, bytesread);
}
completed = true;
inp.close();
out.flush();
out.close();
} catch (IOException e) {
Log.i(TAG, "Unable to load cascade file" + e);
}
return completed;
}
public static CascadeClassifier loadXMLS(Activity activity) {
InputStream is = activity.getResources().openRawResource(R.raw.lbpcascade_frontalface);
File cascadeDir = activity.getDir("cascade", Context.MODE_PRIVATE);
File mCascadeFile = new File(cascadeDir, "lbpcascade_frontalface_improved.xml");
FileOutputStream os = null;
try {
os = new FileOutputStream(mCascadeFile);
byte[] buffer = new byte[4096];
int bytesRead;
while ((bytesRead = is.read(buffer)) != -1) {
os.write(buffer, 0, bytesRead);
}
is.close();
os.close();
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
return new CascadeClassifier(mCascadeFile.getAbsolutePath());
}
public static String loadTrained() {
File file = new File(Environment.getExternalStorageDirectory(), "TrainedData/lbph_trained_data.xml");
return file.toString();
}
}
</code></pre>
<p>These are the images I'm trying to compare. The face belongs to the same person, yet recognition still does not match them:
<a href="https://i.stack.imgur.com/FnFJQ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/FnFJQ.jpg" alt="Image 1"></a>
<a href="https://i.stack.imgur.com/f5xHy.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/f5xHy.jpg" alt="Image 2"></a></p> | 2019-11-14 08:08:30.867000+00:00 | 2019-11-28 04:12:48.887000+00:00 | 2019-11-28 04:12:48.887000+00:00 | java|android|opencv|face-recognition | ['http://arxiv.org/abs/1503.03832', 'https://i.stack.imgur.com/b9QyF.png', 'https://i.stack.imgur.com/3T3pj.png', 'https://github.com/davidsandberg/facenet', 'https://medium.com/analytics-vidhya/facenet-on-mobile-cb6aebe38505', 'https://developers.google.com/android/reference/com/google/android/gms/vision/face/Face', 'https://github.com/ipazc/mtcnn', 'https://www.tensorflow.org/lite', 'https://github.com/farzadz/Australian-Politician-Face-Recognition', 'https://medium.com/tensorflow/training-and-serving-a-realtime-mobile-object-detector-in-30-minutes-with-cloud-tpus-b78971cf1193'] | 10 |
61,790,598 | <p>I think the questions (and answers) you are looking for are the ones below:</p>
<ul>
<li><a href="https://ai.stackexchange.com/questions/2008/how-can-neural-networks-deal-with-varying-input-sizes">https://ai.stackexchange.com/questions/2008/how-can-neural-networks-deal-with-varying-input-sizes</a></li>
<li><a href="https://stackoverflow.com/questions/1766461/how-are-neural-networks-used-when-the-number-of-inputs-could-be-variable?rq=1">How are neural networks used when the number of inputs could be variable?</a></li>
</ul>
<p>But in your case, at the end you could get rid of the problem by simply using some image embedding. You could take some neural net that is pre-trained for example, <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">Inception v3</a> that extracts <code>N</code> features from image, and you will always have constant input, of <code>N</code> features. </p> | 2020-05-14 06:20:27.737000+00:00 | 2020-05-14 06:20:27.737000+00:00 | null | null | 61,790,496 | <p>I am implementing a MultiLayer Perceptron , I am extracting the features of images using SIFT algorithm of image processing and I pass those features to the neural network , the features of images that I am considering are descriptors ,every image has different length of descriptors , some image has 200 descriptors and some image has 240 descriptors , means it's varying . But neural networks accepts fixed size of input data .
How can I pass this type of input to it if it accept varied input then how ?</p> | 2020-05-14 06:11:13.003000+00:00 | 2020-05-14 07:03:58.970000+00:00 | 2020-05-14 07:03:58.970000+00:00 | python|machine-learning|image-processing|deep-learning|computer-vision | ['https://ai.stackexchange.com/questions/2008/how-can-neural-networks-deal-with-varying-input-sizes', 'https://stackoverflow.com/questions/1766461/how-are-neural-networks-used-when-the-number-of-inputs-could-be-variable?rq=1', 'https://arxiv.org/abs/1512.00567'] | 3 |
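<p>A hedged sketch of the fixed-length embedding idea from the answer above, assuming TensorFlow/Keras is available (the file names and the 299x299 input size are only placeholders): every image is mapped to a vector of the same length, which can then be fed to the MLP regardless of how many SIFT descriptors the image would have produced.</p>
<pre><code>import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing import image

# Pretrained CNN used as a fixed-size feature extractor (2048-d with average pooling).
extractor = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def embed(path):
    img = image.load_img(path, target_size=(299, 299))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return extractor.predict(x)[0]          # always a 2048-dimensional vector

features = np.stack([embed(p) for p in ["img1.jpg", "img2.jpg"]])   # constant-size MLP input
</code></pre>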
71,616,059 | <p><code>Simple Copy Paste</code> is a strong method for data augmentation for instance segmentation related tasks.</p>
<p>The research paper is available <a href="https://arxiv.org/abs/2012.07177" rel="nofollow noreferrer">here</a>.</p>
<p>An unofficial GitHub implementation is available <a href="https://github.com/conradry/copy-paste-aug" rel="nofollow noreferrer">here</a>.</p>
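<p>As a minimal, hedged sketch of how the libraries mentioned in the next sentence keep annotations in sync (Albumentations is assumed to be installed; <code>image</code>, <code>masks</code>, <code>bboxes</code> and <code>category_ids</code> are placeholders loaded elsewhere):</p>
<pre><code>import albumentations as A

# Transform the image together with its masks and COCO-format boxes, so the
# annotations of the augmented copy stay consistent automatically.
transform = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.3)],
    bbox_params=A.BboxParams(format="coco", label_fields=["category_ids"]),
)

augmented = transform(image=image, masks=masks, bboxes=bboxes, category_ids=category_ids)
# augmented["image"], augmented["masks"], augmented["bboxes"] can be written back
# into a COCO-style JSON file.
</code></pre>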
<p><code>Albumentation</code> and <code>TorMentor</code> are also useful libraries for data augmentation.</p> | 2022-03-25 11:16:52.183000+00:00 | 2022-04-13 05:51:17.257000+00:00 | 2022-04-13 05:51:17.257000+00:00 | null | 65,812,100 | <p>Background : I am using YOLACT instance segmentation model to train set of images. The dataset size is very small (~20 images). The model doesn't converge properly (of-course given the dataset size). I wanted to increase the dataset size by adding some augmented images. I know we have various image augmentation techniques and packages like imgaug , albumentation, opencv etc. but I need image & annotation file ( COCO JSON ) format to train the model.</p>
<p>So my question is :</p>
<p>Is there a package that helps me to automatically generate the annotations of augmented images ?</p>
<p>or</p>
<p>Is there a better way to address my issue ?</p>
<p>Thank you in advance for your help!</p> | 2021-01-20 15:02:22.303000+00:00 | 2022-04-13 05:51:17.257000+00:00 | null | deep-learning|computer-vision|pytorch|artificial-intelligence|image-segmentation | ['https://arxiv.org/abs/2012.07177', 'https://github.com/conradry/copy-paste-aug'] | 2 |
57,750,395 | <p>Try the references below:</p>
<p>1) A Robust Ensemble Approach to Learn From Positive and Unlabeled Data Using SVM Base Models <a href="http://arxiv.org/abs/1402.3144" rel="nofollow noreferrer">http://arxiv.org/abs/1402.3144</a> (published in Neurocomputing)</p>
<p>2) Assessing binary classifiers using only positive and unlabeled data: <a href="http://arxiv.org/abs/1504.06837" rel="nofollow noreferrer">http://arxiv.org/abs/1504.06837</a></p> | 2019-09-02 00:33:57.957000+00:00 | 2019-09-02 00:33:57.957000+00:00 | null | null | 54,058,064 | <p>I have a data with 20 different types (as a column), 10 out of 20 are useful information, I wanted to classify them into 10 different class using logistic regression, as a result I wanted to show the number of records in each class. Data is not labeled.</p>
<pre><code>183820,9.17101300730551E+018,9,7,79,169,2017,10,17,6,3,0,1,1,0,0,0,0,0,0,637126.9861,5399201
183821,9.17101300712351E+018,9,7,72,147,2017,10,8,6,3,6,2,0,1,1,0,0,0,0,639046.3051,5363761.
</code></pre> | 2019-01-06 02:11:30.113000+00:00 | 2019-09-02 00:33:57.957000+00:00 | 2019-01-06 08:02:41.140000+00:00 | python|machine-learning|classification|regression|logistic-regression | ['http://arxiv.org/abs/1402.3144', 'http://arxiv.org/abs/1504.06837'] | 2 |
23,116,281 | <p>There is a paper <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3166" rel="nofollow">How to Approximate the Inner-product: Fast Dynamic Algorithms for Euclidean Similarity</a> describing how to perform a fast approximation of the inner product. If this is not good or fast enough, I suggest to build an index containing all your documents. A structure similar to a <a href="http://en.wikipedia.org/wiki/Quadtree" rel="nofollow">quadtree</a> but based on a <a href="http://en.wikipedia.org/wiki/Geodesic_grid" rel="nofollow">geodesic grid</a> would probably work really well, see <a href="http://arxiv.org/abs/cs/0701164" rel="nofollow">Indexing the Sphere with the Hierarchical Triangular Mesh</a>.</p>
<p>UPDATE: I completely forgot that you are dealing with 100 dimensions. Indexing high dimensional data is <a href="http://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow">notoriously hard</a> and I am not sure how well indexing a sphere will generalize to 100 dimensions.</p> | 2014-04-16 17:32:25.723000+00:00 | 2014-04-16 17:48:03.303000+00:00 | 2014-04-16 17:48:03.303000+00:00 | null | 23,115,801 | <p>I have a set of 30 000 documents represented by vectors of floats. All vectors have 100 elements. I can find similarity of two documents by comparing them using cosine measure between their vectors. The problem is that it takes to much time to find the most similar documents. Is there any algorithm which can help me with speeding up this?</p>
<p><strong>EDIT</strong></p>
<p>Now, my code just computes the cosine similarity between the first vector and all the other vectors. It takes about 3 sec. I would like to speed it up ;) the algorithm doesn't have to be exact, but it should give results similar to a full search.</p>
<p>The sum of the elements of each vector equals 1.</p>
<pre><code>import time  # was missing from the snippet

start = time.time()
first = allVectors[0]
for vec in allVectors[1:]:
    # cosine_measure is defined elsewhere; element 0 of every vector is skipped
    cosine_measure(vec[1:], first[1:])
print str(time.time() - start)
</code></pre> | 2014-04-16 17:04:38.840000+00:00 | 2014-04-16 20:47:55.430000+00:00 | 2014-04-16 17:21:21.977000+00:00 | algorithm|optimization|cosine-similarity | ['http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.3166', 'http://en.wikipedia.org/wiki/Quadtree', 'http://en.wikipedia.org/wiki/Geodesic_grid', 'http://arxiv.org/abs/cs/0701164', 'http://en.wikipedia.org/wiki/Curse_of_dimensionality'] | 5 |
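<p>For what it is worth, here is a hedged NumPy sketch of a fully vectorised version of the loop in the question (it assumes <code>allVectors</code> is an N x 101 array whose first column is an id, as the slicing in the snippet suggests):</p>
<pre><code>import numpy as np

# Normalise once, then one matrix-vector product gives all similarities at once
# instead of a Python-level loop over 30,000 documents.
vecs = np.asarray(allVectors, dtype=np.float64)[:, 1:]      # drop the id column (assumption)
norms = np.linalg.norm(vecs, axis=1, keepdims=True)
unit = vecs / np.clip(norms, 1e-12, None)

sims = unit[1:] @ unit[0]             # cosine similarity of document 0 vs. all others
best = 1 + np.argsort(-sims)[:10]     # indices of the 10 most similar documents
</code></pre>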
72,718,976 | <p>There does not seem to be a client-facing feature allowing you to fine-tune Copilot directly.</p>
<p>Here are two illustrations of why this feature is, for now (Q2 2022), missing.</p>
<p>The <a href="https://github.com/features/copilot" rel="nofollow noreferrer">Copilot feature page</a> initially included this:</p>
<blockquote>
<h2>How will GitHub Copilot get better over time?</h2>
<p>GitHub Copilot doesn’t actually test the code it suggests, so the code may not even compile or run. GitHub Copilot can only hold a very limited context, so even single source files longer than a few hundred lines are clipped and only the immediately preceding context is used. And GitHub Copilot may suggest old or deprecated uses of libraries and languages. You can use the code anywhere, but you do so at your own risk.</p>
</blockquote>
<p>As <a href="https://twitter.com/tomekkorbak" rel="nofollow noreferrer">Tomek Korbak</a> explains <a href="https://twitter.com/tomekkorbak/status/1410554250514636805" rel="nofollow noreferrer">on Twitter</a>:</p>
<blockquote>
<p>Actually, Copilot's completions will always be optimised for human's liking, not necessarily compiler's liking.</p>
<p>That's because the language model training objective (predicting the next token in text) is great at capturing short-term dependencies (which explains the human feel of generated snippets).</p>
<p>But it struggles to capture long-term, global, semantic properties of generated sequences such as compilability. And there's no easy way of including compilability as a signal for their training.</p>
<p>The standard way -- fine-tuning language models using RL with compilability as a reward -- notoriously leads to catastrophic forgetting: less diverse and less accurate completions.</p>
</blockquote>
<p>Tomek references "<a href="https://arxiv.org/pdf/2106.04985.pdf" rel="nofollow noreferrer">Energy-Based Models for Code Generation under Compilability Constraints (pdf)</a>"</p>
<blockquote>
<p><a href="https://i.stack.imgur.com/ulfPr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ulfPr.png" alt="https://pbs.twimg.com/media/E5NHqGjXIAYRtwa?format=png&name=small" /></a></p>
<p>Our solution (KL-DPG) boosts compilability rate of generated sequences from 55% to 70%.<br />
RL fine-tuning can do better but at a cost of catastrophic forgetting.</p>
<p>Overall, energy-based models (EBMs) turn out to be great at expressing weird, sequence-level constraints that would be super hard as to express as normalised priors for autoregressive language models.</p>
<p>EBMs provide a way of injecting our structured, symbolic knowledge into large language models without breaking them down or sacrificing their uncanny abilities.<br />
The space of further applications in controllable generation is huge.</p>
</blockquote>
<p>So not so easy.</p>
<p><a href="https://tmabraham.github.io/" rel="nofollow noreferrer">Tanishq Mathew Abraham</a> explains in "<a href="https://tmabraham.github.io/blog/github_copilot" rel="nofollow noreferrer">Coding with GitHub Copilot</a>"</p>
<blockquote>
<p>I wonder if the GitHub team might also develop a way of perhaps fine-tuning GitHub Copilot to specific use-cases.</p>
<p>For example, there may be a specific GitHub Copilot models for fastai, JAX, etc. They would be fine-tuned on the source code of of these libraries and codebases that use these libraries.</p>
<p>But making sure that the tool does not provide outdated suggestions would still be a challenge.<br />
I don’t think it would be possible to provide suggestions for a brand-new library that does not have enough codebases using it to train on.</p>
<p>Additionally, for situations like fastai where there are older APIs and newer APIs, when fine-tuning a model, the codebases using the older APIs would have to be filtered out.</p>
</blockquote> | 2022-06-22 16:25:11.640000+00:00 | 2022-06-22 16:25:11.640000+00:00 | null | null | 72,554,328 | <p>We can fine tune language models like <code>BERT</code>, <code>GPT-3</code>.</p>
<p>Can I fine tune <code>GitHub Copilot</code> model?</p>
<p>I have already looked into examples from <a href="https://copilot.github.com/" rel="nofollow noreferrer">https://copilot.github.com/</a> but can't find the details.</p>
<p>Would really appreciate if someone had fine tuned Github Copilot.</p> | 2022-06-09 03:12:02.543000+00:00 | 2022-07-03 22:04:36.377000+00:00 | 2022-06-22 02:56:48.850000+00:00 | github|deep-learning|openai|codex|github-copilot | ['https://github.com/features/copilot', 'https://twitter.com/tomekkorbak', 'https://twitter.com/tomekkorbak/status/1410554250514636805', 'https://arxiv.org/pdf/2106.04985.pdf', 'https://i.stack.imgur.com/ulfPr.png', 'https://tmabraham.github.io/', 'https://tmabraham.github.io/blog/github_copilot'] | 7 |
2,070,024 | <p>For C-style strings, the convention is that a string ends with a <code>0</code>. This means that you can't have a <code>0</code> in a C-style string.</p>
<p>If you have a <code>double</code> value that you don't need, you can store that at the end of your array, and check for it. For example, you might be able to use <code>0</code>, or NaN (Not a number). See how to use NaN in <a href="https://stackoverflow.com/questions/1923837/how-to-use-nan-and-inf-in-c">C</a> or in <a href="https://stackoverflow.com/questions/235386/using-nan-in-c">C++</a>. If you do use a non-NaN number as a sentinel, you should read <a href="http://arxiv.org/abs/cs.PL/0701192" rel="nofollow noreferrer"><em>The pitfalls of verifying floating-point computations</em></a> before you compare floating point numbers for equality.</p>
<p>Of course, since you're using C++, and you don't want to remember your array's size, you should think about using <code>std::vector<double></code> instead.</p> | 2010-01-15 07:16:28.537000+00:00 | 2010-01-15 07:43:40.963000+00:00 | 2017-05-23 12:07:05.010000+00:00 | null | 2,069,949 | <p>Quick question. When you are accessing a character array, I know you can set the pointer to the first element in the array, and use a while look and do something like </p>
<pre><code>while (*ptr != '\0') {
do something
}
</code></pre>
<p>Now is there a double or int equivalent? </p>
<pre><code>#define ARRAY_SIZE 10
double someArray[ARRAY_SIZE] = {0};
double *ptr = someArray;
// then not sure what to do here? I guess I am looking for an equivalent of the above while loop, but don't want to just do:
for (int i = 0; i < ARRAY_SIZE; i++, ptr++)
cout << *ptr;
</code></pre>
<p>thanks!</p> | 2010-01-15 06:51:57.197000+00:00 | 2010-01-15 08:26:34.990000+00:00 | 2010-01-15 07:00:47.007000+00:00 | c++|pointers | ['https://stackoverflow.com/questions/1923837/how-to-use-nan-and-inf-in-c', 'https://stackoverflow.com/questions/235386/using-nan-in-c', 'http://arxiv.org/abs/cs.PL/0701192'] | 3 |
57,373,834 | <p>If you refer to the Learning Rate Finder (as described by Smith for example here: <a href="https://arxiv.org/abs/1803.09820" rel="nofollow noreferrer">https://arxiv.org/abs/1803.09820</a>), it seems like you can emulate it by using:</p>
<pre><code>learning_rate: {
exponential_decay_learning_rate {
initial_learning_rate: 0.004
decay_steps: 10000
decay_factor: 1.3
}
}
</code></pre>
<p>with a decay_factor above 1.</p>
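<p>Outside the Object Detection API, the underlying LR range test can be sketched roughly as below. This is only a hedged illustration with plain Keras: <code>model</code> and <code>train_batches</code> are placeholders, and the growth factor mirrors the <code>decay_factor</code> above.</p>
<pre><code>import numpy as np
import tensorflow as tf

# Grow the learning rate exponentially each step and record the loss; the usual
# choice is the largest LR reached just before the loss starts to diverge.
lr, growth = 1e-6, 1.3
optimizer = tf.keras.optimizers.SGD(learning_rate=lr)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
history = []

for x, y in train_batches:
    tf.keras.backend.set_value(optimizer.learning_rate, lr)
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    history.append((lr, float(loss)))
    lr *= growth
    if not np.isfinite(history[-1][1]) or lr > 1.0:
        break
</code></pre>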
<p>You will still have to look at the loss and choose the best learning rate yourself though.</p> | 2019-08-06 10:18:04.793000+00:00 | 2019-08-06 10:28:39.153000+00:00 | 2019-08-06 10:28:39.153000+00:00 | null | 57,361,891 | <p>I want to search for the best learning rate using tensorflow object detection api. But in the config file I'm not able to find anything for it. I can add <code>schedule</code> but it can't search for the best learning rate.</p>
<pre><code>learning_rate: {
manual_step_learning_rate {
initial_learning_rate: 0.003
schedule {
step: 6000
learning_rate: .0003
}
schedule {
step: 12000
learning_rate: .00003
}
  }
}
</code></pre>
<p>Is there any trick or way to search for best learning rate.</p> | 2019-08-05 15:39:21.840000+00:00 | 2019-08-06 10:28:39.153000+00:00 | null | tensorflow|object-detection | ['https://arxiv.org/abs/1803.09820'] | 1 |
63,979,528 | <p>For dense optical flow estimation in a real-time setup, <a href="https://arxiv.org/pdf/1612.01925.pdf" rel="nofollow noreferrer">FlowNet</a> is a good option. It can estimate optical flow at a relatively high FPS, and you can take their trained model to perform inference. Since you want to run the estimation in a non-GPU environment, you can try converting the model to the <a href="https://github.com/onnx/onnx/blob/master/docs/Overview.md" rel="nofollow noreferrer">ONNX</a> format. A good implementation of FlowNet is available in NVIDIA's GitHub <a href="https://github.com/NVIDIA/flownet2-pytorch" rel="nofollow noreferrer">repo</a>. I am not sure exactly which algorithm NVIDIA is using in its SDK for optical flow.</p>
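<p>A minimal sketch of the ONNX route mentioned above, assuming a PyTorch flow model is already loaded as <code>model</code> (the input shape of two stacked RGB frames is only an assumption, and custom layers may need extra work to export):</p>
<pre><code>import torch
import onnxruntime as ort

model.eval()
dummy = torch.randn(1, 6, 384, 512)          # two stacked RGB frames (assumption)
torch.onnx.export(model, dummy, "flow.onnx", opset_version=11,
                  input_names=["frames"], output_names=["flow"])

# CPU-only inference with onnxruntime
session = ort.InferenceSession("flow.onnx", providers=["CPUExecutionProvider"])
flow = session.run(None, {"frames": dummy.numpy()})[0]
</code></pre>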
<p>The FlowNet2 is built upon previous work of FlowNet to compute large displacement. However, if you are concerned about occlusion then you may check out their follow up work on FlowNet3. Another alternative to FlowNet is <a href="https://arxiv.org/pdf/1709.02371.pdf" rel="nofollow noreferrer">PwC-Net</a>.</p> | 2020-09-20 13:32:19.780000+00:00 | 2020-09-21 04:59:37.507000+00:00 | 2020-09-21 04:59:37.507000+00:00 | null | 63,976,772 | <p>I am working on a hardware-based solution ( without GPU) for dense optical flow to get real-time performance @ 30fps with decent accuracy. Something comparable to or better than NVIDIA’s optical flow SDK. Can someone please suggest good algorithms other than Pyramidal Lukas Kanade and horn Schunck. I found SGM as a good starting point but it’s difficult to implement on FPGA or DSP core. The target is to measure large displacements with occlusion as well as similar to real-world videos.</p>
<p>It would be great if someone could tell what exactly algorithm NVIDIA has used.</p> | 2020-09-20 07:57:39.213000+00:00 | 2020-09-21 10:06:42.777000+00:00 | 2020-09-21 10:06:42.777000+00:00 | computer-vision|hardware-acceleration|opticalflow | ['https://arxiv.org/pdf/1612.01925.pdf', 'https://github.com/onnx/onnx/blob/master/docs/Overview.md', 'https://github.com/NVIDIA/flownet2-pytorch', 'https://arxiv.org/pdf/1709.02371.pdf'] | 4 |
65,650,938 | <p>The part that is confusing you is the <code>Bellman approximation</code> which is used to update the <code>Q-values</code> of a state that is defined as <code>s</code> given an action <code>a</code> is taken.</p>
<p><a href="https://i.stack.imgur.com/SqF75.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SqF75.png" alt="enter image description here" /></a></p>
<p><code>Q</code> for this state, <code>s</code>, and action, <code>a</code>, equals the expected immediate reward plus the discounted long-term value of the destination state.</p>
<p>We take the maximum over the <code>Q-values</code> (i.e. the action values) of the next state <code>s'</code>, reached from state <code>s</code>, considering the actions <code>a'</code> available there. The actions we can take when going from a state <code>s</code> to a state <code>s'</code> form a discrete, mutually exclusive set (i.e., your environment allows you to move up, left, right or down), so the optimal action is the one that results in the highest action value.</p>
<p><a href="https://i.stack.imgur.com/Oj5rX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oj5rX.png" alt="enter image description here" /></a></p>
<p>Take the image above as an example. The agent starts at state <code>s0</code> and is able to move up, left, right, or down; these are the actions. The actions the agent takes are stochastic rather than deterministic: when the agent intends to go up, there is a <code>0.33</code> chance that the agent might instead go to the left or the right. I will just assign a value of 1 to gamma here.</p>
<p>This is how you calculate the <code>Q-values</code> for the state, <code>s0</code> and action, <code>up</code> with the values of going to the state being the immediate reward received by the agent, <code>V1 = 1, V2 = 2, V3 = 3, V4 = 4</code>.</p>
<pre><code>Q(s0,up) = 0.33 * V1 + 0.33 * V2 + 0.33 * V4
= 0.33 * 1 + 0.33 * 2 + 0.33 * 4
= 2.31
</code></pre>
<p>Next, if you calculate Q-values for all the other possible states and their actions you would get the following:</p>
<pre><code>Q(s0,left) = 1.98
Q(s0,right) = 2.64
Q(s0,down) = 2.97
</code></pre>
<p>Therefore the final value for the state <code>s0</code> is the <code>maximum</code> of those action values, which is <code>2.97</code>. That is all you are really trying to do there in the code.</p>
<p>As for what <code>currentQ[action] = newQ;</code> does, it updates the current <code>Q-value</code> for that action from its old value to the new, updated value at the end of an episode.</p>
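<p>A rough Python-style sketch of that exact update (plain NumPy; <code>model</code> and <code>target_model</code> are placeholders for whatever predicts Q-values, and while the code in the question is JavaScript, the arithmetic is the same):</p>
<pre><code>import numpy as np

GAMMA = 0.99

current_qs = model.predict(current_states)        # shape: (batch, n_actions)
future_qs = target_model.predict(new_states)      # shape: (batch, n_actions)

for i, (action, reward, done) in enumerate(zip(actions, rewards, dones)):
    if done:
        new_q = reward                                   # no future for terminal states
    else:
        new_q = reward + GAMMA * np.max(future_qs[i])    # maxFutureQ in the JS code
    current_qs[i][action] = new_q                        # currentQ[action] = newQ

model.fit(current_states, current_qs, verbose=0)         # train toward the updated targets
</code></pre>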
<p>One thing you have to understand about why it does this is that the agent updates its <code>Q-values</code> after an episode, then training runs again, and the process is repeated until the agent manages to reach its goal (for the Atari paper this algorithm was introduced in, that goal was a mean score of about 19, which is equivalent to winning 19 out of 21 games).</p>
<p>You can read more about the entire process from the <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjRqeDI55DuAhVn4jgGHWo0DDAQFjADegQIAxAC&url=https%3A%2F%2Farxiv.org%2Fabs%2F1312.5602&usg=AOvVaw1wUW9fyPY7pUHTVhWXfO4h" rel="nofollow noreferrer">original paper</a>.</p>
<p>But I think you need more of an understanding of Bellman's equation before that, as it is extremely important for understanding Reinforcement Learning. DeepMind has an excellent YouTube series about this that can be <a href="https://www.youtube.com/watch?v=hMbxmRyDw5M" rel="nofollow noreferrer">found here</a>.</p>
<p>Even better there is a <a href="https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiq2oD_6ZDuAhWWgtgFHSXeA88QFjABegQIARAC&url=https%3A%2F%2Fwww.andrew.cmu.edu%2Fcourse%2F10-703%2Ftextbook%2FBartoSutton.pdf&usg=AOvVaw0AIbsm2D6IhSbHdz9RPK2i" rel="nofollow noreferrer">free book</a> on Reinforcement Learning from the founding fathers of it, Richard Sutton and Andrew Barto. I believe they go in detail about this in Chapter 4.</p>
<p>Edit:</p>
<p>I am not too sure what you mean by how it affects training but I will outline the entire process for you to understand how training works for this:</p>
<p><a href="https://i.stack.imgur.com/itOaQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/itOaQ.png" alt="enter image description here" /></a></p> | 2021-01-10 07:19:11.487000+00:00 | 2021-01-11 05:20:08.317000+00:00 | 2021-01-11 05:20:08.317000+00:00 | null | 65,635,342 | <p>devs,</p>
<p>I found a bunch of examples of DQN implementations, but because I'm no TensorFlow expert, I'm a little bit confused.</p>
<p>Let's look at one of them, <a href="https://dumpz.org/c77HNAA4XxGF" rel="nofollow noreferrer">here</a>.</p>
<p>I can understand, on the 73rd line, we slice some batch of stored data <code>[{state, action, reward, newState, done}]</code> exactly, then we get <code>currentStates</code> which is <code>[[s1, s2, ...]]</code>, then on 75 we use the model to get <code>currentQs</code> which should be, how I understand, <code>[[act1, act2, ...]]</code>, because our model is used to get action from env's state. The same happens to <code>newCurrentStates</code> and <code>futureQs</code>.</p>
<p>But then on 88, we see <code>let maxFutureQ = Math.max(futureQs);</code>. What happened here? <code>futureQs</code> is an array of arrays with actions probabilities for each futureState? And then <code>maxFutureQ</code> should be an action probability, why then we add this to reward? This part is confusing me.</p>
<p>Also I cannot understand why we need to do <code>currentQ[action] = newQ;</code> on 94.</p>
<p>Please, could someone help me to understand what is going on here and leave comments for lines, maybe?</p>
<p>Thanks in advance.</p>
<p>edit:</p>
<p>discussed code:
<a href="https://i.stack.imgur.com/yfgZu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yfgZu.png" alt="discussed code" /></a></p> | 2021-01-08 19:33:03.727000+00:00 | 2021-01-11 07:38:26.573000+00:00 | 2021-01-11 07:38:26.573000+00:00 | javascript|tensorflow|tensorflow.js|q-learning|dqn | ['https://i.stack.imgur.com/SqF75.png', 'https://i.stack.imgur.com/Oj5rX.png', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjRqeDI55DuAhVn4jgGHWo0DDAQFjADegQIAxAC&url=https%3A%2F%2Farxiv.org%2Fabs%2F1312.5602&usg=AOvVaw1wUW9fyPY7pUHTVhWXfO4h', 'https://www.youtube.com/watch?v=hMbxmRyDw5M', 'https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiq2oD_6ZDuAhWWgtgFHSXeA88QFjABegQIARAC&url=https%3A%2F%2Fwww.andrew.cmu.edu%2Fcourse%2F10-703%2Ftextbook%2FBartoSutton.pdf&usg=AOvVaw0AIbsm2D6IhSbHdz9RPK2i', 'https://i.stack.imgur.com/itOaQ.png'] | 6 |
43,082,582 | <p>It seems you have mostly referred to research on Deep Networks for Object Detection. Prior to the success of deep networks, researchers were looking to to the possibility of using text with image features to implement ideas similar to yours. You might want to refer to papers from ACM Multimedia and IEEE TMM, especially those before 2014.</p>
<p>The problem was that those approaches could not perform as well as the simplest of the deep networks that use only images. There is some work on combining both images and text, such as <a href="https://arxiv.org/pdf/1611.09534.pdf" rel="nofollow noreferrer">this paper</a>. I am sure at least some researchers are already working on this.</p> | 2017-03-29 01:14:34.233000+00:00 | 2017-03-29 01:44:26.610000+00:00 | 2017-03-29 01:44:26.610000+00:00 | null | 43,058,387 | <p>I am new to computer vision, and now I am do some research on object detection. I have read papers about faster RCNN and RFCN, also read YOLO. It seems the biggest problem is the speed? And all of them use image data data only. Are there any models that combines text and image data? Which means we can use the information from text to help detection when the training data is small. For example, when the training data is small, the model cannot tell dogs and cats clearly, but the model could tell there is a bone near that object, and the model gets some information from text that the object near a bone is most likely a dog, thus the model now could tell what the object is. Does this kind of algorithm exist? I haven't found them, hope you could help me. Thanks a lot. </p> | 2017-03-27 23:57:17+00:00 | 2017-03-29 01:44:26.610000+00:00 | null | nlp|computer-vision|object-detection | ['https://arxiv.org/pdf/1611.09534.pdf'] | 1 |
46,204,977 | <p>Not by default. Grid search is easy to use and easy to understand but it suffers from the curse of <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">dimensionality problem</a>. Instead of grid search, Google Cloud ML Engine uses a <a href="https://cloud.google.com/blog/big-data/2017/08/hyperparameter-tuning-in-cloud-machine-learning-engine-using-bayesian-optimization" rel="nofollow noreferrer">Bayesian optimization technique</a> that based on an algorithm called <a href="https://arxiv.org/abs/1012.2599" rel="nofollow noreferrer">Gaussian process bandits</a>. </p>
<p>The underlying technology used by Cloud ML Engine is from a Google Research project <a href="https://research.google.com/pubs/pub46180.html" rel="nofollow noreferrer">Vizier</a> which is a Google-internal service for performing black-box optimization that has become the de facto parameter tuning engine at Google.</p>
<p>However, if you really want to use grid search, you can force Cloud ML Engine to use it by specifying the "algorithm" parameter in your hyperparameter yaml file as described in the <a href="https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms" rel="nofollow noreferrer">Cloud ML Engine documentation</a></p> | 2017-09-13 18:57:37.063000+00:00 | 2018-05-16 14:04:11.617000+00:00 | 2018-05-16 14:04:11.617000+00:00 | null | 46,204,976 | <p>The grid search technique is an easy to use and an embarrassingly parallel approach for finding the best set of hyperparameters for machine learning models. Does Google Cloud Machine Learning (ML) Engine use grid search?</p> | 2017-09-13 18:57:37.063000+00:00 | 2018-05-16 14:04:11.617000+00:00 | null | google-cloud-ml|google-cloud-ml-engine | ['https://en.wikipedia.org/wiki/Curse_of_dimensionality', 'https://cloud.google.com/blog/big-data/2017/08/hyperparameter-tuning-in-cloud-machine-learning-engine-using-bayesian-optimization', 'https://arxiv.org/abs/1012.2599', 'https://research.google.com/pubs/pub46180.html', 'https://cloud.google.com/ml-engine/docs/tensorflow/hyperparameter-tuning-overview#search_algorithms'] | 5 |
43,224,665 | <p>Depending on what you are doing, it might take a lot longer. I had 20x speedups by using a GPU. If you read some Computer Vision papers, they train their networks on ImageNet for about 1-2 weeks. Now imagine if that took 20x longer...</p>
<p>Having said that: There are much simpler tasks. For example, for my <a href="https://arxiv.org/abs/1701.08380" rel="nofollow noreferrer">HASY dataset</a> you can train a reasonable network without a GPU in probably 3 hours. Similar small datasets are MNIST, CIFAR-10, CIFAR-100.</p> | 2017-04-05 07:31:43.273000+00:00 | 2017-04-05 07:31:43.273000+00:00 | null | null | 43,200,846 | <p>As the question already suggests, I am new to deep learning. I know that the learning process of the model will be slow without GPU. If I am willing to wait, Will it be OK if i use CPU only ? </p> | 2017-04-04 07:27:31.863000+00:00 | 2020-04-03 13:32:58.893000+00:00 | null | gpu|deep-learning|gpgpu | ['https://arxiv.org/abs/1701.08380'] | 1 |
42,335,170 | <p>I have bothered on this question for a while too, and I have also seen some papers mention this same issue. Here is a recent paper I found; <a href="https://arxiv.org/pdf/1511.07356.pdf" rel="nofollow noreferrer">Recombinator Networks: Learning Coarse-to-Fine Feature Aggregation</a>. I have not fully read the paper but it seems to bother on your question. I can update this answer as soon as I fully grasp the paper.</p> | 2017-02-20 01:38:28.630000+00:00 | 2017-02-20 01:38:28.630000+00:00 | null | null | 39,037,813 | <p>When it comes to convolutional neural networks there are normally many papers recommending different strategies. I have heard people say that it is an absolute must to add padding to the images before a convolution, otherwise to much spatial information is lost. On the other hand they are happy to use pooling, normally max-pooling, to reduce the size of the images. I guess the thought here is that max pooling reduces the spatial information but also reduces the sensitivity to relative positions, so it is a trade-off?</p>
<p>I have heard other people saying that zero-padding does not keep more information, just more empty data. This is because by adding zeros you will not get a reaction from your kernel anyway when part of the information is missing.</p>
<p>I can imagine that zero-padding works if you have big kernels with "scrap values" in the edges and the source of activation centered in a smaller region of the kernel?</p>
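<p>To make the trade-off concrete, here is a quick shape check (TensorFlow; the sizes are arbitrary):</p>
<pre><code>import tensorflow as tf

x = tf.random.normal([1, 32, 32, 3])
conv_valid = tf.keras.layers.Conv2D(8, 5, padding="valid")(x)   # (1, 28, 28, 8): border is lost
conv_same = tf.keras.layers.Conv2D(8, 5, padding="same")(x)     # (1, 32, 32, 8): zero-padded
pooled = tf.keras.layers.MaxPooling2D(2)(conv_same)             # (1, 16, 16, 8): down-sampled
print(conv_valid.shape, conv_same.shape, pooled.shape)
</code></pre>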
<p>I would be happy to read some papers about the effect of down-sampling using pooling contra not using padding, but I can't find much about it. Any good recommendations or thoughts?
<a href="https://i.stack.imgur.com/mGHdI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mGHdI.png" alt="Spatial down-sampling using convolution contra pooling"></a></p>
<p>Figure: Spatial down-sampling using convolution contra pooling (Researchgate)</p> | 2016-08-19 11:22:11.930000+00:00 | 2017-02-20 01:38:28.630000+00:00 | null | neural-network|convolution|conv-neural-network|downsampling | ['https://arxiv.org/pdf/1511.07356.pdf'] | 1 |
72,440,166 | <p>You can open the file directly from the URL and then work on it as a PDF by using <code>urllib.request</code>:</p>
<pre><code>import pdftotext
from urllib.request import urlopen
target_url = "https://arxiv.org/pdf/2012.05439.pdf" # to change.
file = urlopen(target_url)
pdf = pdftotext.PDF(file) # add password if password protected.
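# `file` here is the urlopen() response, i.e. a binary file-like object; if PDF()
# needs a seekable stream, it can be wrapped instead (after `import io`):
# pdf = pdftotext.PDF(io.BytesIO(file.read()))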
# How many pages?
print(len(pdf))
# Iterate over all the pages
for page in pdf:
print(page)
# Read some individual pages
print(pdf[0])
print(pdf[1])
# Read all the text into one string
print("\n\n".join(pdf))
</code></pre> | 2022-05-30 21:23:32.417000+00:00 | 2022-05-30 21:23:32.417000+00:00 | null | null | 72,438,825 | <p>I want to read two PDF files from URL without download. Then I want to extract text using pdftotext</p>
<pre><code>import pdftotext
with open("pdf_path1", "rb") as f:
pdf = pdftotext.PDF(f)
# If it's password-protected
with open("b.pdf", "rb") as f:
pdf = pdftotext.PDF(f, "secret")
# How many pages?
print(len(pdf))
# Iterate over all the pages
for page in pdf:
print(page)
# Read some individual pages
print(pdf[0])
print(pdf[1])
# Read all the text into one string
print("\n\n".join(pdf))
</code></pre>
<p>How can I resolve this error? or is there any other technique available to read PDF from URL?</p> | 2022-05-30 18:46:16.480000+00:00 | 2022-05-30 23:26:52.307000+00:00 | 2022-05-30 18:47:30.903000+00:00 | python-3.x|text-extraction|pdftotext|pdfcompare | [] | 0 |
72,440,765 | <p>You cannot read somebody's PDF online; it must be your own copy (all PDFs must be downloaded). Your computer can only work with local HTML pages and their contents; that's the way it was, and still is:</p>
<p>How the web works, in just one line (more graphic methods are available):</p>
<p><code><A HyperRef=HTextTransferProtocol://www.website.html>download to view our BBS pages</a></code></p>
<pre><code>curl -o temp.pdf https://arxiv.org/pdf/2012.05439.pdf & pdftotext -layout -f 1 -l 1 temp.pdf -
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1318k 100 1318k 0 0 488k 0 0:00:02 0:00:02 --:--:-- 488k
Scheduling Beyond CPUs for HPC....
</code></pre>
<p><a href="https://i.stack.imgur.com/oFe0l.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oFe0l.png" alt="enter image description here" /></a></p> | 2022-05-30 23:05:44.793000+00:00 | 2022-05-30 23:26:52.307000+00:00 | 2022-05-30 23:26:52.307000+00:00 | null | 72,438,825 | <p>I want to read two PDF files from URL without download. Then I want to extract text using pdftotext</p>
<pre><code>import pdftotext
with open("pdf_path1", "rb") as f:
pdf = pdftotext.PDF(f)
# If it's password-protected
with open("b.pdf", "rb") as f:
pdf = pdftotext.PDF(f, "secret")
# How many pages?
print(len(pdf))
# Iterate over all the pages
for page in pdf:
print(page)
# Read some individual pages
print(pdf[0])
print(pdf[1])
# Read all the text into one string
print("\n\n".join(pdf))
</code></pre>
<p>How can I resolve this error? or is there any other technique available to read PDF from URL?</p> | 2022-05-30 18:46:16.480000+00:00 | 2022-05-30 23:26:52.307000+00:00 | 2022-05-30 18:47:30.903000+00:00 | python-3.x|text-extraction|pdftotext|pdfcompare | ['https://i.stack.imgur.com/oFe0l.png'] | 1 |
56,031,603 | <p>The idea of monads as models of computation can be traced back to the work of Eugenio Moggi. Among Haskell practitioners, the best known paper by Moggi on this matter is <a href="https://www.disi.unige.it/person/MoggiE/ftp/ic91.pdf" rel="noreferrer"><em>Notions of computations as monads</em></a> (1991). Relevant quotes include:</p>
<blockquote>
<p>The [lambda]-calculus is considered a useful mathematical tool in the study of programming languages, since programs can be <em>identified</em> with [lambda]-terms. However, if one goes further and uses [beta][eta]-conversion to prove equivalence of programs, then a gross simplification is introduced (programs are identified with total functions from <em>values</em> to <em>values</em>) that may jeopardise the applicability of theoretical results, In this paper we introduce calculi based on a categorical semantics for <em>computations</em>, that provide a correct basis for proving equivalence of programs for a wide range of <em>notions of computation</em>. [p. 1]</p>
<p>[...]</p>
<p>We do not take as a starting point for proving equivalence of programs the theory of [beta][eta]-conversion, which identifies the denotation of a program (procedure) of type A -> B with a total function from A to B, since this identification wipes out completely behaviours such as non-termination, non-determinism, and side-effects, that can be exhibited by real programs. Instead, we proceed as follows:</p>
<ul>
<li>
<ol>
<li>We take category theory as a general theory of functions and develop on top a categorical semantics of computations based on monads. [...] [p. 1]</li>
</ol>
</li>
</ul>
<p>[...]</p>
<p>The basic idea behind the categorical semantics below is that, in order to interpret a programming language in a category [C], we distinguish the object A of values (of type A) from the object TA of computations (of type A), and take as denotations of programs (of type A) the elements of TA. In particular, we identify the type A with the object of values (of type A) and obtain the object of computations (of type A) by applying an unary type-constructor T to A. We call T a <em>notion of computation</em>, since it abstracts away from the type of values computations may produce. There are many choices for TA corresponding to different notions of computations. [pp. 2-3]</p>
<p>[...]</p>
<p>We have identified monads as important to modeling notions of computations, but <em>computational monads</em> seem to have additional properties; e.g., they have a tensorial strength and may satisfy the mono requirement. It is likely that there are other properties of computational monads still to be identified, and there is no reason to believe that such properties have to be found in the literature on monads. [p. 27 -- thanks danidiaz]</p>
</blockquote>
<p>A related older paper by Moggi, <a href="https://www.disi.unige.it/person/MoggiE/ftp/lics89.pdf" rel="noreferrer"><em>Computational lambda-calculus and monads</em></a> (1989 -- thanks michid for the reference), speaks literally of "computational model[s]":</p>
<blockquote>
<p>A <strong>computational model</strong> is a monad (T;[eta];[mu]) satisfying the <strong>mono requirement</strong>: [eta-A] is a mono for every A [belonging to] C.</p>
<p>There is an alternative description of a monad (see[7]), which is easier to justify computationally. [...] [p. 2]</p>
</blockquote>
<p>This particular bit of terminology was dropped in the <em>Notions of computations as monads</em>, as Moggi sharpened the focus of his presentation on the "alternative description" (namely, Kleisli triples, which are composed by, in Haskell parlance, a type constructor, return and bind). The essence, though, remain the same throughout.</p>
<hr />
<p>Philip Wadler presents the idea with a more practical bent in <a href="http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf" rel="noreferrer"><em>Monads for functional programming</em></a> (1992):</p>
<blockquote>
<p>The use of monads to structure functional programs is described. Monads provide a convenient framework for simulating effects found in other languages, such as global state, exception handling, output, or non-determinism. [p. 1]</p>
<p>[...]</p>
<p>Pure functional languages have this advantage: all flow of data is made explicit.And this disadvantage: sometimes it is painfully explicit.</p>
<p>A program in a pure functional language is written as a set of equations. Explicit data flow ensures that the value of an expression depends only on its free variables. Hence substitution of equals for equals is always valid, making such programs especially easy to reason about. Explicit data flow also ensures that the order of computation is irrelevant, making such programs susceptible to lazy evaluation.</p>
<p>It is with regard to modularity that explicit data flow becomes both a blessing and a curse. On the one hand, it is the ultimate in modularity. All data in and all data out are rendered manifest and accessible, providing a maximum of flexibility. On the other hand, it is the nadir of modularity. The essence of an algorithm can become buried under the plumbing required to carry data from its point of creation to its point of use. [p. 2]</p>
<p>[...]</p>
<p>Say it is desired to add error checking, so that the second example above returns a sensible error message. In an impure language, this is easily achieved with the use of exceptions.</p>
<p>In a pure language, exception handling may be mimicked by introducing a type to represent computations that may raise an exception. [pp. 3 -4 -- note this is before monads are introduced as an unifying abstraction.]</p>
<p>[...]</p>
<p>Each of the variations on the interpreter has a similar structure, which may be abstracted to yield the notion of a monad.</p>
<p>In each variation, we introduced a type of computations. Respectively, M represented computations that could raise exceptions, act on state, and generate output. By now the reader will have guessed that M stands for monad. [p. 6]</p>
</blockquote>
<p>This is one of the roots of the usage of "computation" to refer to monadic values.</p>
<hr />
<p>A significant body of later literature makes use of the concept of computation in this manner. For instance, this is the opening passage of <a href="https://arxiv.org/abs/1406.4823" rel="noreferrer"><em>Notions of Computation as Monoids</em></a> by Exequiel Rivas and Mauro Jaskelioff (2014 -- thanks danidiaz for the suggestion):</p>
<blockquote>
<p>When constructing a semantic model of a system or when structuring computer code,there are several notions of computation that one might consider. Monads (Moggi, 1989; Moggi, 1991) are the most popular notion, but other notions,such as arrows (Hughes, 2000) and, more recently, applicative functors (McBride & Paterson, 2008) have been gaining widespread acceptance. Each of these notions of computation has particular characteristics that makes them more suitable for some tasks than for others. Nevertheless, there is much to be gained from unifying all three different notions under a single conceptual framework. [p. 1]</p>
</blockquote>
<p>Another good example is <a href="https://www.sciencedirect.com/science/article/pii/S1571066108003435" rel="noreferrer"><em>Comonadic notions of computation</em></a> by Tarmo Uustalu and Varmo Vene (2000):</p>
<blockquote>
<p>Since the seminal work by Moggi in the late 80s, monads, more precisely, strong monads, have become a generally accepted tool for structuring effectful notions of computation, such as computation with exceptions, output, computation using an environment, state-transforming, nondeterministic and probabilistic computation etc. The idea is to use a Kleisli category as the category of impure, effectful functions, with the Kleisli inclusion giving an embedding of the pure functions from the base category. [...] [p. 263]</p>
<p>[...]</p>
<p>The starting-point in the monadic approach to (call-by-value) effectful computation is the idea that impure, effectful functions from A to B must be nothing else than pure functions from A to TB. Here pure functions live in a base category C and T is an endofunctor on C that describes the notion of effect of interest; it is useful to think of TA as the type of effectful computations of values of a given type A.</p>
<p>For this to work, impure functions must have identities and compose. Therefore T cannot merely be a functor, but must be a monad. [p. 265]</p>
</blockquote>
<hr />
<p>Such uses of "computation" fit the usual computer science notion of <a href="https://en.wikipedia.org/wiki/Model_of_computation" rel="noreferrer">models of computation</a> (see <a href="https://stackoverflow.com/a/56034400/2751851">danidiaz's answer</a> for more on that). In the informal functional programming literature, allusions to monads as models of computation have varying degrees of precision. Still, they generally draw from, or at least are offshoots of, a rigorous idea.</p> | 2019-05-07 23:20:10.617000+00:00 | 2019-05-08 10:35:25.803000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 56,026,072 | <p>What does it mean exactly when people say "a monad is a model of computation"? Does this mean computation in the sense of turing completeness? If so, how?</p>
<p><strong>Clarification</strong>: This question is not about explaining monads but what people mean with "model of computation" in this context and how this relates to monads. See towards the end of <a href="https://stackoverflow.com/a/3273549/402428">this answer</a> for a typical use of this phrase. </p>
<p>In my understanding a turing machine, the theory of recursive functions, lambda calculus etc. are all models of computation and I cannot see how a monad would relate to that if at all. </p> | 2019-05-07 15:35:37.597000+00:00 | 2019-05-08 10:35:25.803000+00:00 | 2019-05-07 18:48:37.833000+00:00 | haskell|monads|category-theory | ['https://www.disi.unige.it/person/MoggiE/ftp/ic91.pdf', 'https://www.disi.unige.it/person/MoggiE/ftp/lics89.pdf', 'http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf', 'https://arxiv.org/abs/1406.4823', 'https://www.sciencedirect.com/science/article/pii/S1571066108003435', 'https://en.wikipedia.org/wiki/Model_of_computation', 'https://stackoverflow.com/a/56034400/2751851'] | 7 |
51,847,000 | <p>In CNNs, it is common to do dimensionality reduction with a kernel size of <code>1x1</code>. That way only the filter/feature map dimension is affected and the spatial information is kept intact, because the input is mapped 1:1 to the output. </p>
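<p>For illustration, a minimal PyTorch sketch of such a <code>1x1</code> convolution reducing the channel dimension while leaving the spatial size untouched (the channel counts and feature map size below are made up, not taken from the question):</p>
<pre><code>import torch
import torch.nn as nn

# Reduce 256 feature maps to 32 with a 1x1 kernel; spatial dims are untouched.
reduce = nn.Conv2d(in_channels=256, out_channels=32, kernel_size=1)

x = torch.randn(8, 256, 14, 14)   # (batch, channels, height, width)
y = reduce(x)
print(y.shape)                    # torch.Size([8, 32, 14, 14])
</code></pre>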
<p>A good example of this is the <a href="https://arxiv.org/pdf/1409.4842.pdf" rel="nofollow noreferrer">Inception</a> architecture, which uses 1x1 convolutions to reduce the dimensionality in the inception modules. </p> | 2018-08-14 17:38:32.460000+00:00 | 2018-08-14 17:38:32.460000+00:00 | null | null | 51,846,766 | <p>I was hoping to use a CNN as a dimensionality reduction for my LSTM layers.</p>
<p>I have a panel dataset as the following:</p>
<pre><code>sequence of days = 5065
lags = 14 days (those are time series lags)
features = 2767
</code></pre>
<p>Thus, <code>[5065, 14, 2767]</code></p>
<p>As you can see I have more than half as many features as data points, and I wanted to reduce that. Ideally, I wanted to feed my LSTM layers with compressed feature information with something like 32 features, hopefully in the following shape:</p>
<pre><code>[5065, 14, 32]
</code></pre>
<p>However, when setting up the CNN, I understand that filters should be 32, but what about my kernel size? I'm not sure I'm doing the right thing.</p> | 2018-08-14 17:22:02.693000+00:00 | 2018-08-14 18:32:09.657000+00:00 | 2018-08-14 18:32:09.657000+00:00 | conv-neural-network|lstm|dimensionality-reduction | ['https://arxiv.org/pdf/1409.4842.pdf'] | 1 |
55,286,578 | <p>Fashion MNIST is a harder problem than MNIST. Therefore, it is not surprising that your architecture does not perform as well.</p>
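<p>The paper linked in the next paragraph proposes Random Erasing data augmentation. If you go that route, a minimal sketch using torchvision's built-in transform could look like this (the parameter values are just the usual defaults; note that <code>RandomErasing</code> operates on tensors, so it comes after <code>ToTensor</code>):</p>
<pre><code>from torchvision import transforms

# Random Erasing is applied to a tensor, hence it follows ToTensor().
train_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3)),
])
</code></pre>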
<p>If you want to achieve a higher accuracy, you may want to try a method described in <a href="https://arxiv.org/pdf/1708.04896.pdf" rel="nofollow noreferrer">this paper</a>.</p> | 2019-03-21 17:59:46.570000+00:00 | 2019-03-21 17:59:46.570000+00:00 | null | null | 55,268,035 | <p>I am playing around with PyTorch and I implemented a CNN on the MNIST dataset which has 99+% accuracy on both the train and test sets.</p>
<p>I decided to switch to Fashion MNIST in order to see how the architecture of my network performs. I got 95% accuracy on the train set and 91% on the test set.</p>
<p>Then, I started trying to improve that performance by tuning the model.</p>
<p>Briefly, my model looks like this: </p>
<pre><code> Conv -> ReLU -> Batch norm -> Max pool ->
Conv -> ReLU -> Batch norm -> Max pool ->
Conv -> ReLU -> Batch norm -> Max pool ->
Conv -> ReLU -> Batch norm -> Max pool ->
Linear -> ReLu -> Linear -> Output
Optimizer: Stochastic Gradient Descent
Transformations: ToTensor() only
</code></pre>
<p>My tests were: removing the last Conv layer, adding average pooling instead of max pooling in the last Conv layer, inspecting the train loss curve in order to adjust the learning rate statically or dynamically, and changing the batch size.</p>
<p>However, with the above combinations either my model will overfit (e.g. 97% train, 89% test) or it will not have the best performance (e.g. 91% train, 89% test).</p>
<p>Am I missing something? Am I doing something wrong? Are there any other tuning parameters that I need to adjust that I didn't think of?</p>
<p>Thank you</p> | 2019-03-20 18:42:31.520000+00:00 | 2019-03-21 17:59:46.570000+00:00 | null | deep-learning|computer-vision|conv-neural-network|pytorch | ['https://arxiv.org/pdf/1708.04896.pdf'] | 1 |
73,550,280 | <p>I am wondering about this as well. The best thing I have found is <a href="https://arxiv.org/pdf/1603.02754.pdf" rel="nofollow noreferrer">the original XGBoost paper</a>. Section 2.1 makes it sound as if XGBoost uses regression trees as the main building block for both regression and classification. If this is correct, then Alpha and Lambda probably work in the same way as they do in linear regression.</p>
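<p>For reference, a minimal sketch of where these knobs live in the XGBoost Python (scikit-learn style) API; the values are arbitrary and only meant to show the parameter names:</p>
<pre><code>from xgboost import XGBRegressor

model = XGBRegressor(
    reg_alpha=0.1,    # L1 penalty on leaf weights (alias: alpha)
    reg_lambda=1.0,   # L2 penalty on leaf weights (alias: lambda)
    gamma=1.0,        # minimum loss reduction to make a further split (alias: min_split_loss)
    n_estimators=100,
    max_depth=6,
)
# model.fit(X_train, y_train)  # X_train / y_train are your own data
</code></pre>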
<p>Gamma controls how deep the trees will be. A large gamma means a large hurdle to adding another tree level, so a larger gamma regularizes the model by growing shallower trees. E.g., a depth-2 tree has a smaller range of predicted values than a depth-10 tree, so such a model will have lower variance.</p> | 2022-08-31 02:26:27.447000+00:00 | 2022-08-31 02:26:27.447000+00:00 | null | null | 68,091,037 | <p>I have a question to ask:</p>
<p>How exactly do the L1 and L2 regularization terms on weights work in the XGBoost algorithm?</p>
<p>As I understand it, L1 is used by LASSO and L2 by ridge regression, and L1 can shrink weights to 0 while L2 can't. I understand the mechanics when using simple linear regression, but I have no clue how it works in tree-based models.</p>
<p>Furthermore, gamma is another parameter that makes the model more conservative. How should I think about the difference between L1/L2 and the gamma parameter?</p>
<p>I have found in documentation very little to this problem:</p>
<p><strong>lambda [default=1, alias: reg_lambda]</strong></p>
<ul>
<li>L2 regularization term on weights. Increasing this value will make
model more conservative.</li>
</ul>
<p><strong>alpha [default=0, alias: reg_alpha]</strong></p>
<ul>
<li>L1 regularization term on weights. Increasing this value will make
model more conservative.</li>
</ul>
<p><strong>gamma [default=0, alias: min_split_loss]</strong></p>
<ul>
<li>Minimum loss reduction required to make a further partition on a leaf
node of the tree. The larger gamma is, the more conservative the
algorithm will be.</li>
</ul>
<p>All of them range from 0 to inf.</p>
<p>Thanks in advance for any answer/comment!</p> | 2021-06-22 21:44:04.403000+00:00 | 2022-08-31 02:26:27.447000+00:00 | 2021-06-25 13:58:58.623000+00:00 | python|pandas|xgboost|hyperparameters|optuna | ['https://arxiv.org/pdf/1603.02754.pdf'] | 1 |
21,154,776 | <p>You can look at this as an exercise in group theory, by considering each sort of move as a permutation. You then need to find out if the scrambled order of the cube amounts to a product of some of the available permutations in some order and, if so, what that order is.</p>
<p>It turns out that there are algorithms to work this out, some of them very sophisticated, and computer packages that implement them. For the packages and the subject one starting point is <a href="http://en.wikipedia.org/wiki/Computational_group_theory" rel="nofollow noreferrer">http://en.wikipedia.org/wiki/Computational_group_theory</a>.</p>
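<p>As a small, concrete illustration of this membership question, here is a sketch using SymPy's permutation groups: encode each available move as a permutation, build the group they generate, and ask whether a scrambled state lies in it. The permutations below are tiny toy examples, not real cube moves.</p>
<pre><code>from sympy.combinatorics import Permutation, PermutationGroup

# Toy "moves" acting on 6 positions (real cube moves would permute stickers).
move_a = Permutation([1, 2, 0, 3, 4, 5])   # cycles positions 0 -> 1 -> 2
move_b = Permutation([0, 1, 2, 4, 5, 3])   # cycles positions 3 -> 4 -> 5

group = PermutationGroup([move_a, move_b])

scramble = Permutation([2, 0, 1, 5, 3, 4])
print(group.contains(scramble))   # True if the scramble is a product of the moves
print(group.order())              # size of the generated group
</code></pre>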
<p>One reference to an implementable algorithm is by Knuth at <a href="http://arxiv.org/pdf/math.GR/9201304.pdf" rel="nofollow noreferrer">http://arxiv.org/pdf/math.GR/9201304.pdf</a>. I have implemented a version of this, so it is doable, but the paper is very dense - see my reference to it at <a href="https://stackoverflow.com/questions/15733394/regarding-approach-to-solving-sliding-tiles-puzzle">Regarding approach to solving sliding tiles puzzle</a>. If you know more group theory than I do, you will be able to read even denser papers and implement more efficient algorithms. Oh - if you work through the paper you should be able to first of all find if the problem is solvable, and then, in theory, find a sequence of permutations that solves it, but that sequence may be impractically long.</p>
<p>This particular algorithm is not completely different from the scheme that you have outlined above, in that it looks for combinations of the available moves that keep some of the objects being permuted fixed, while restoring one other object to its proper place.</p> | 2014-01-16 06:29:11.283000+00:00 | 2014-01-17 19:18:26.150000+00:00 | 2017-05-23 11:53:34.627000+00:00 | null | 21,151,342 | <p>I want to write a cube solver for a Rubik's cube of <em>any</em> size.</p>
<p>I know how cubes bigger than 3x3x3 can be solved:</p>
<ul>
<li>First we need to solve the center (flat) fields of the cube, so they look like in the picture.</li>
</ul>
<p><img src="https://i.stack.imgur.com/aveEH.png" alt="Cube with solved centers"></p>
<ul>
<li>Second, we solve the edges:</li>
</ul>
<p><img src="https://i.stack.imgur.com/oJXwv.png" alt="Cube with solved edges"></p>
<ul>
<li>And finally, we can reduce whole problem to solving 3x3x3 cube:</li>
</ul>
<p><img src="https://i.stack.imgur.com/oGUeR.png" alt="4x4x4 cube reduced into 3x3x3 cube"></p>
<hr>
<p>That sounds very easy, but the problem is that the ways to solve centers and edges depend on cube size. For 3x3x3 the algorithm for solving centers and edges has 0 moves, for 4x4x4 it is longer, and for 5x5x5 it is even longer.</p>
<p>But how can I compute these moves? Is there any simple way?</p>
<p>Thanks in advance!</p> | 2014-01-16 00:43:51.910000+00:00 | 2014-01-17 19:18:26.150000+00:00 | null | algorithm|rubiks-cube | ['http://en.wikipedia.org/wiki/Computational_group_theory', 'http://arxiv.org/pdf/math.GR/9201304.pdf', 'https://stackoverflow.com/questions/15733394/regarding-approach-to-solving-sliding-tiles-puzzle'] | 3 |
61,764,326 | <p>Firstly, see <a href="https://stackoverflow.com/questions/34090734/how-to-use-nltk-regex-pattern-to-extract-a-specific-phrase-chunk">How to use nltk regex pattern to extract a specific phrase chunk?</a> </p>
<p>Let's see what the POS tags for the sentence are:</p>
<pre><code>from nltk import word_tokenize, pos_tag
text = "Operating profit margin was 8.3%, compared to 11.8% a year earlier."
pos_tag(word_tokenize(text))
</code></pre>
<p>[out]:</p>
<pre><code>[('Operating', 'NN'),
('profit', 'NN'),
('margin', 'NN'),
('was', 'VBD'),
('8.3', 'CD'),
('%', 'NN'),
(',', ','),
('compared', 'VBN'),
('to', 'TO'),
('11.8', 'CD'),
('%', 'NN'),
('a', 'DT'),
('year', 'NN'),
('earlier', 'RBR'),
('.', '.')]
</code></pre>
<h1>First gotcha! No <code>JJ</code> in any of the tags</h1>
<p>There's no <code>JJ</code> tag in any of the POS in that sentence. </p>
<h1>Let's head back to the paper <a href="https://arxiv.org/pdf/1811.11008.pdf" rel="noreferrer">https://arxiv.org/pdf/1811.11008.pdf</a></h1>
<p><a href="https://i.stack.imgur.com/PyBgx.png" rel="noreferrer"><img src="https://i.stack.imgur.com/PyBgx.png" alt="enter image description here"></a></p>
<h1>Thinking it through, the <code>NP JJ</code> isn't the ultimate goal; the ultimate goal is to produce the <code>UP</code> or <code>DOWN</code> label based on some heuristics.</h1>
<p>Let's rephrase the steps:</p>
<ol>
<li><p>Parse the sentence with a <strong>parser</strong> (in this case <em>regular expression parser using some sort of grammar</em>)</p></li>
<li><p>Identify a signal that the sentence has <strong>a pattern</strong> that can tell us about the ultimate label.</p>
<p>2a. Traverse the parse tree to extract <strong>another pattern</strong> that tells us about the performance indicator and numeric values. </p>
<p>2b. Use the extracted numeric values to determine the directionality <code>UP</code> / <code>DOWN</code> using <strong>some heuristics</strong></p>
<p>2c. Tag the sentence with the <code>UP</code> / <code>Down</code> identified in (2b)</p></li>
</ol>
<h1>Let's see which component we can build first.</h1>
<blockquote>
<p>2a. Extract <strong>another pattern</strong> that tells us about the performance indicator and numeric values.</p>
</blockquote>
<p>We know that the output for a percentage is always <code>CD NN</code>, from: </p>
<pre><code>('8.3', 'CD'), ('%', 'NN')
('11.8', 'CD'), ('%', 'NN')
</code></pre>
<p>So let's try catching that in the grammar. </p>
<pre><code>from nltk import RegexpParser

patterns = """
PERCENT: {<CD><NN>}
"""
PChunker = RegexpParser(patterns)
PChunker.parse(pos_tag(word_tokenize(text)))
</code></pre>
<p>[out]:</p>
<pre><code>Tree('S', [('Operating', 'NN'), ('profit', 'NN'), ('margin', 'NN'), ('was', 'VBD'),
Tree('PERCENT', [('8.3', 'CD'), ('%', 'NN')]),
(',', ','), ('compared', 'VBN'), ('to', 'TO'),
Tree('PERCENT', [('11.8', 'CD'), ('%', 'NN')]),
('a', 'DT'), ('year', 'NN'), ('earlier', 'RBR'), ('.', '.')])
</code></pre>
<p>Now, how do we get this:</p>
<blockquote>
<ol start="2">
<li>Identify a signal that the sentence has a pattern that can tell us about the ultimate label.</li>
</ol>
</blockquote>
<p>We know that <code><PERCENT> compared to <PERCENT></code> is a good pattern, so let's try to encode it. </p>
<p>We know that <code>compared to</code> has the tags <code>VBN TO</code> from </p>
<pre><code> ('8.3', 'CD'),
('%', 'NN'),
(',', ','),
('compared', 'VBN'),
('to', 'TO'),
('11.8', 'CD'),
('%', 'NN'),
</code></pre>
<p>How about this:</p>
<pre><code>patterns = """
PERCENT: {<CD><NN>}
P2P: {<PERCENT><.*>?<VB.*><TO><PERCENT>}
"""
PChunker = RegexpParser(patterns)
PChunker.parse(pos_tag(word_tokenize(text)))
</code></pre>
<p>[out]:</p>
<pre><code>Tree('S', [('Operating', 'NN'), ('profit', 'NN'), ('margin', 'NN'), ('was', 'VBD'),
Tree('P2P', [
Tree('PERCENT', [('8.3', 'CD'), ('%', 'NN')]),
(',', ','), ('compared', 'VBN'), ('to', 'TO'),
Tree('PERCENT', [('11.8', 'CD'), ('%', 'NN')])]
),
('a', 'DT'), ('year', 'NN'), ('earlier', 'RBR'), ('.', '.')]
)
</code></pre>
<h1>But that pattern could have been any arbitrary number. We need a signal for the <code>performance indicator</code></h1>
<p>Since I'm no expert in the financial domain, simply using the presence of <code>operating profit margin</code> might be a good signal, i.e.: </p>
<pre><code>from nltk import word_tokenize, pos_tag, RegexpParser
patterns = """
PERCENT: {<CD><NN>}
P2P: {<PERCENT><.*>?<VB.*><TO><PERCENT>}
"""
PChunker = RegexpParser(patterns)
text = "Operating profit margin was 8.3%, compared to 11.8% a year earlier."
indicators = ['operating profit margin']
for i in indicators:
if i in text.lower():
print(PChunker.parse(pos_tag(word_tokenize(text))))
</code></pre>
<p>[out]:</p>
<pre><code>(S
Operating/NN
profit/NN
margin/NN
was/VBD
(P2P
(PERCENT 8.3/CD %/NN)
,/,
compared/VBN
to/TO
(PERCENT 11.8/CD %/NN))
a/DT
year/NN
earlier/RBR
./.)
</code></pre>
<h1>Now how do we get the <code>UP</code> / <code>DOWN</code>?</h1>
<blockquote>
<p>2b. Use the extracted numeric values to determine the directionality UP / DOWN using some heuristics</p>
</blockquote>
<p>Just from the example sentence, nothing other than "earlier" tells us about the temporal order of the numbers. </p>
<p>So let's hypothesize: if we have the pattern <code>PERCENT VBN TO PERCENT earlier</code>, we say that the 2nd percentage is the older number. </p>
<pre><code>import nltk
from nltk import word_tokenize, pos_tag, RegexpParser
patterns = """
PERCENT: {<CD><NN>}
P2P: {<PERCENT><.*>?<VB.*><TO><PERCENT><.*>*<RBR>}
"""
def traverse_tree(tree, label=None):
# print("tree:", tree)
for subtree in tree:
if type(subtree) == nltk.tree.Tree and subtree.label() == label:
yield subtree
PChunker = RegexpParser(patterns)
parsed_text = PChunker.parse(pos_tag(word_tokenize(text)))
for p2p in traverse_tree(parsed_text, 'P2P'):
print(p2p)
</code></pre>
<p>[out]:</p>
<pre><code>(P2P
(PERCENT 8.3/CD %/NN)
,/,
compared/VBN
to/TO
(PERCENT 11.8/CD %/NN)
a/DT
year/NN
earlier/RBR)
</code></pre>
<h1>And the <code>UP</code> / <code>DOWN</code> label?</h1>
<pre><code>import nltk
from nltk import word_tokenize, pos_tag, RegexpParser
patterns = """
PERCENT: {<CD><NN>}
P2P: {<PERCENT><.*>?<VB.*><TO><PERCENT><.*>*<RBR>}
"""
PChunker = RegexpParser(patterns)
def traverse_tree(tree, label=None):
# print("tree:", tree)
for subtree in tree:
if type(subtree) == nltk.tree.Tree and subtree.label() == label:
yield subtree
def labelme(text):
parsed_text = PChunker.parse(pos_tag(word_tokenize(text)))
for p2p in traverse_tree(parsed_text, 'P2P'):
# Check if the subtree ends with "earlier".
if p2p.leaves()[-1] == ('earlier', 'RBR'):
            # Check which percentage is larger.
percentages = [float(num[0]) for num in p2p.leaves() if num[1] == 'CD']
# Sanity check that there's only 2 numbers from our pattern.
assert len(percentages) == 2
if percentages[0] > percentages[1]:
return 'DOWN'
else:
return 'UP'
text = "Operating profit margin was 8.3%, compared to 11.8% a year earlier."
labelme(text)
</code></pre>
<h1>Now the question becomes...</h1>
<p><strong>Do you want to write so many rules and catch them using the <code>labelme()</code> above?</strong></p>
<p><strong>Are the patterns you write foolproof?</strong> </p>
<p>E.g., will there be a case where the pattern comparing percentages using the indicator and "earlier" does not yield "UP" or "DOWN" as expected?</p>
<p><strong>Why are we writing rules in the AI age?</strong> </p>
<p><strong>Do you already have humanly annotated data where there are sentences and their corresponding UP/DOWN labels?</strong> If so, let me suggest something like <a href="https://allennlp.org/tutorials" rel="noreferrer">https://allennlp.org/tutorials</a> or <a href="https://github.com/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb" rel="noreferrer">https://github.com/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb</a></p> | 2020-05-13 00:29:45.390000+00:00 | 2020-05-13 01:30:14.027000+00:00 | 2020-05-13 01:30:14.027000+00:00 | null | 61,756,189 | <p>I'm working on replicating an algorithm describe in this paper: <a href="https://arxiv.org/pdf/1811.11008.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.11008.pdf</a></p>
<p>On the last page it describes extracting a leaf defined in the grammar labelled 'NP JJ' using the following example: Operating profit margin was 8.3%, compared to 11.8% a year earlier.</p>
<p>I'm expecting to see a leaf labelled 'NP JJ' but I'm not. I'm tearing my hair out as to why (relatively new to regular expressions.) </p>
<pre><code>def split_sentence(sentence_as_string):
''' function to split sentence into list of words
'''
words = word_tokenize(sentence_as_string)
return words
def pos_tagging(sentence_as_list):
words = nltk.pos_tag(sentence_as_list)
return words
def get_regex(sentence, grammar):
sentence = pos_tagging(split_sentence(sentence));
cp = nltk.RegexpParser(grammar)
result = cp.parse(sentence)
return result
example_sentence = "Operating profit margin was 8.3%, compared to 11.8% a year earlier."
grammar = """JJ : {< JJ.∗ > ∗}
V B : {< V B.∗ >}
NP : {(< NNS|NN >)∗}
NP P : {< NNP|NNP S >}
RB : {< RB.∗ >}
CD : {< CD >}
NP JJ : : {< NP|NP P > +(< (>< .∗ > ∗ <) >) ∗ (< IN >< DT > ∗ < RB > ∗ < JJ > ∗ < NP|NP P >) ∗ < RB > ∗(< V B >< JJ >< NP >)∗ < V B > (< DT >< CD >< NP >) ∗ < NP|NP P > ∗ < CD > ∗ < .∗ > ∗ < CD > ∗| < NP|NP P >< IN >< NP|NP P >< CD >< .∗ > ∗ <, >< V B > < IN >< NP|NP P >< CD >}"""
grammar = grammar.replace('∗','*')
tree = get_regex(example_sentence, grammar)
print(tree)
</code></pre> | 2020-05-12 15:56:23.113000+00:00 | 2020-12-22 05:43:01.777000+00:00 | 2020-05-13 14:14:01.413000+00:00 | python|regex|nlp|nltk | ['https://stackoverflow.com/questions/34090734/how-to-use-nltk-regex-pattern-to-extract-a-specific-phrase-chunk', 'https://arxiv.org/pdf/1811.11008.pdf', 'https://i.stack.imgur.com/PyBgx.png', 'https://allennlp.org/tutorials', 'https://github.com/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb'] | 5 |
57,943,594 | <p>You are better off leaving the gradients intact and setting up your optimizer so that it accounts for the effects you need.</p>
<p>Gradients will in most cases be zeroed out before a new forward pass anyway.</p>
<p>Some newer algorithms such as <a href="https://arxiv.org/abs/1904.00962" rel="nofollow noreferrer">LAMB</a> do the trick of parameter-wise (layer-wise) gradient normalization (MSE in this case), much like you plan.</p>
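<p>As a concrete illustration of the spectral normalization suggestion in the next paragraph, a minimal sketch of wrapping discriminator layers with <code>spectral_norm</code> (the layer sizes are made up, not taken from the tutorial you linked):</p>
<pre><code>import torch.nn as nn
from torch.nn.utils import spectral_norm

# Each weight layer of the discriminator is wrapped with spectral normalization,
# which constrains its Lipschitz constant instead of touching the gradients directly.
netD = nn.Sequential(
    spectral_norm(nn.Conv2d(3, 64, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv2d(64, 128, 4, stride=2, padding=1)),
    nn.LeakyReLU(0.2, inplace=True),
    spectral_norm(nn.Conv2d(128, 1, 4, stride=1, padding=0)),
)
</code></pre>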
<p>Also, check the <a href="https://arxiv.org/abs/1805.08318" rel="nofollow noreferrer">SAGAN</a> paper and why they used <code>nn.utils.spectral_norm</code> since you mentioned GANs.</p> | 2019-09-15 11:17:10.840000+00:00 | 2019-09-15 11:23:42.253000+00:00 | 2019-09-15 11:23:42.253000+00:00 | null | 57,931,967 | <p>In this <a href="https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html" rel="nofollow noreferrer">GAN tutorial</a>, if you scroll down to the training loop you can see they combine the gradients
<code>errD = errD_real + errD_fake</code> like this. Where <code>errD_real = criterion(output, label)</code> and <code>errD_fake = criterion(output, label)</code> and <code>criterion = nn.BCELoss()</code>. I want to do the same thing but before doing a backward pass I want to normalize both gradients to the lower Euclidean norm of the two. How would I do that?</p>
<p>I know I can access the gradients of each weight individually on netD by printing out <code>netD.weight.grad</code>, but is there some way to batchnorm them to the lower Euclidean norm of the two?</p>
<p>Here's the part of the training loop I'm talking about:</p>
<pre><code>for epoch in range(num_epochs):
# For each batch in the dataloader
for i, data in enumerate(dataloader, 0):
############################
# (1) Update D network: maximize log(D(x)) + log(1 - D(G(z)))
###########################
## Train with all-real batch
netD.zero_grad()
# Format batch
real_cpu = data[0].to(device)
b_size = real_cpu.size(0)
label = torch.full((b_size,), real_label, device=device)
# Forward pass real batch through D
output = netD(real_cpu).view(-1)
# Calculate loss on all-real batch
errD_real = criterion(output, label)
# Calculate gradients for D in backward pass
errD_real.backward()
D_x = output.mean().item()
## Train with all-fake batch
# Generate batch of latent vectors
noise = torch.randn(b_size, nz, 1, 1, device=device)
# Generate fake image batch with G
fake = netG(noise)
label.fill_(fake_label)
# Classify all fake batch with D
output = netD(fake.detach()).view(-1)
# Calculate D's loss on the all-fake batch
errD_fake = criterion(output, label)
# Calculate the gradients for this batch
errD_fake.backward()
D_G_z1 = output.mean().item()
# Add the gradients from the all-real and all-fake batches
errD = errD_real + errD_fake
# Update D
optimizerD.step()
...
</code></pre> | 2019-09-14 02:13:18.723000+00:00 | 2019-09-15 11:23:42.253000+00:00 | null | pytorch|normalization | ['https://arxiv.org/abs/1904.00962', 'https://arxiv.org/abs/1805.08318'] | 2 |
61,112,127 | <p>Recently there has been theoretical work on this: <a href="https://arxiv.org/abs/1809.09953" rel="nofollow noreferrer">https://arxiv.org/abs/1809.09953</a>. Assuming you use a ReLU MLP, all hidden layers have the same number of nodes, and your loss function and the true function that you're approximating with a neural network obey some technical properties (in the paper), you can choose your depth to be of order $\log(n)$ and the width of your hidden layers to be of order $n^{d/(2(\beta+d))}\log^2(n)$. Here $n$ is your sample size, $d$ is the dimension of your input vector, and $\beta$ is a smoothness parameter for your true function. Since $\beta$ is unknown, you will probably want to treat it as a hyperparameter.</p>
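<p>Just to make the orders of magnitude concrete, here is a tiny sketch that plugs made-up values into those rates ($n$, $d$ and $\beta$ below are arbitrary; $\beta$ is the unknown smoothness you would treat as a hyperparameter):</p>
<pre><code>import math

n = 10_000   # sample size (made up)
d = 20       # input dimension (made up)
beta = 2.0   # smoothness parameter, unknown in practice -> tune it

depth = math.ceil(math.log(n))                                     # order log(n)
width = math.ceil(n ** (d / (2 * (beta + d))) * math.log(n) ** 2)  # order n^(d/(2(beta+d))) * log^2(n)

print(depth, width)   # roughly 10 and ~5.6k for these made-up values
</code></pre>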
<p>Doing this you can guarantee that, with probability converging to $1$ as a function of sample size, your approximation error converges to $0$ as a function of sample size. They give the rate. Note that this isn't guaranteed to be the 'best' architecture, but it can at least give you a good place to start. Further, my own experience suggests that things like dropout can still help in practice.</p> | 2020-04-09 00:29:46.123000+00:00 | 2020-04-09 00:29:46.123000+00:00 | null | null | 10,565,868 | <p>If we have 10 eigenvectors then we can have 10 neural nodes in the input layer. If we have 5 output classes then we can have 5 nodes in the output layer. But what are the criteria for choosing the number of hidden layers in an MLP, and how many neural nodes should go in 1 hidden layer?</p> | 2012-05-12 17:18:08.477000+00:00 | 2020-04-09 00:29:46.123000+00:00 | 2015-09-16 21:42:29.927000+00:00 | machine-learning|neural-network|deep-learning|perceptron | ['https://arxiv.org/abs/1809.09953'] | 1
61,520,060 | <p>The question is quite old, and my previous answer about xgboost seems outdated given the latest developments of <a href="https://github.com/Microsoft/LightGBM/" rel="nofollow noreferrer">LightGBM</a>, which implements various tree-based learning algorithms (a minimal usage sketch follows the list):</p>
<ul>
<li><a href="https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf" rel="nofollow noreferrer">GBDT</a>, Gradient boosting decision tree</li>
<li><a href="https://arxiv.org/pdf/1505.01866.pdf" rel="nofollow noreferrer">DART</a>, or Dropouts meet Multiple Additive Regression Trees</li>
<li>GOSS, or Gradient-based One-Side Sampling</li>
<li>Random Forest</li>
</ul>
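<p>An illustrative sketch of selecting one of these through the scikit-learn style interface of the Python API mentioned below (the parameter values are arbitrary, purely to show where <code>boosting_type</code> goes):</p>
<pre><code>import lightgbm as lgb

# boosting_type can be 'gbdt', 'dart', 'goss' or 'rf'
model = lgb.LGBMClassifier(boosting_type='dart', n_estimators=200, learning_rate=0.05)
# model.fit(X_train, y_train)  # X_train / y_train are your own data
</code></pre>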
<p>It also has a <a href="https://lightgbm.readthedocs.io/en/latest/Python-API.html" rel="nofollow noreferrer">Python API</a>.</p> | 2020-04-30 09:22:02.877000+00:00 | 2020-04-30 09:22:02.877000+00:00 | null | null | 9,035,754 | <p>Do you know of a good library for gradient boosting tree machine learning?</p>
<p>preferably:</p>
<ul>
<li>with good algorithms such as AdaBoost, TreeBoost, AnyBoost, LogitBoost, etc</li>
<li>with configurable weak classifiers</li>
<li>capable of both classification and prediction (regression)</li>
<li>with all kinds of allowed signals: numbers, categories or free text</li>
<li>C/C++ or Python</li>
<li>opensource</li>
</ul>
<p>So far I have found <a href="http://www.multiboost.org/home" rel="noreferrer">http://www.multiboost.org/home</a> which looks good. But I wonder if there are other libraries?</p> | 2012-01-27 15:30:55.687000+00:00 | 2020-04-30 09:22:02.877000+00:00 | null | python|c|machine-learning | ['https://github.com/Microsoft/LightGBM/', 'https://papers.nips.cc/paper/6907-lightgbm-a-highly-efficient-gradient-boosting-decision-tree.pdf', 'https://arxiv.org/pdf/1505.01866.pdf', 'https://lightgbm.readthedocs.io/en/latest/Python-API.html'] | 4 |
62,806,906 | <p>I read the recommended papers in the answer and comments from
<a href="https://stackoverflow.com/a/40295999/8625228">https://stackoverflow.com/a/40295999/8625228</a></p>
<p>From Ioffe and Szegedy (2015)’s point of view, only BN is used in the network structure. Li et al. (2018) give statistical and experimental analyses showing that there is a variance shift when practitioners use Dropout before BN. Thus, Li et al. (2018) recommend applying Dropout after all BN layers.</p>
<p>From Ioffe and Szegedy (2015)’s point of view, BN is located <strong>inside/before</strong> the activation function. However, Chen et al. (2019) use an IC layer which combines dropout and BN, and they recommend using BN after ReLU.</p>
<p>To be on the safe side, I use either Dropout or BN, but not both, in a network.</p>
<p>Chen, Guangyong, Pengfei Chen, Yujun Shi, Chang-Yu Hsieh, Benben Liao,
and Shengyu Zhang. 2019. “Rethinking the Usage of Batch Normalization
and Dropout in the Training of Deep Neural Networks.” <em>CoRR</em>
abs/1905.05928. <a href="http://arxiv.org/abs/1905.05928" rel="noreferrer">http://arxiv.org/abs/1905.05928</a>.</p>
<p>Ioffe, Sergey, and Christian Szegedy. 2015. “Batch Normalization:
Accelerating Deep Network Training by Reducing Internal Covariate
Shift.” <em>CoRR</em> abs/1502.03167. <a href="http://arxiv.org/abs/1502.03167" rel="noreferrer">http://arxiv.org/abs/1502.03167</a>.</p>
<p>Li, Xiang, Shuo Chen, Xiaolin Hu, and Jian Yang. 2018. “Understanding
the Disharmony Between Dropout and Batch Normalization by Variance
Shift.” <em>CoRR</em> abs/1801.05134. <a href="http://arxiv.org/abs/1801.05134" rel="noreferrer">http://arxiv.org/abs/1801.05134</a>.</p> | 2020-07-09 03:25:36.847000+00:00 | 2020-07-25 02:38:52.630000+00:00 | 2020-07-25 02:38:52.630000+00:00 | null | 39,691,902 | <p><em>The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.</em></p>
<p>When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?</p>
<p>It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?</p>
<p>Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there <em>is</em> a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.</p>
<p>Thank you much!</p>
<p><strong>UPDATE:</strong></p>
<p>An experimental test <em>seems</em> to suggest that ordering <em>does</em> matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.</p> | 2016-09-25 21:12:23.370000+00:00 | 2022-06-16 12:41:45.240000+00:00 | 2019-02-03 21:14:55.993000+00:00 | python|neural-network|tensorflow|conv-neural-network | ['https://stackoverflow.com/a/40295999/8625228', 'http://arxiv.org/abs/1905.05928', 'http://arxiv.org/abs/1502.03167', 'http://arxiv.org/abs/1801.05134'] | 4 |
54,554,286 | <p>Based on the <a href="https://arxiv.org/abs/1801.05134" rel="nofollow noreferrer">research paper</a>, for better performance we should use BN before applying Dropout.</p> | 2019-02-06 13:01:58.687000+00:00 | 2019-02-06 13:01:58.687000+00:00 | null | null | 39,691,902 | <p><em>The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.</em></p>
<p>When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?</p>
<p>It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?</p>
<p>Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there <em>is</em> a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.</p>
<p>Thank you much!</p>
<p><strong>UPDATE:</strong></p>
<p>An experimental test <em>seems</em> to suggest that ordering <em>does</em> matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.</p> | 2016-09-25 21:12:23.370000+00:00 | 2022-06-16 12:41:45.240000+00:00 | 2019-02-03 21:14:55.993000+00:00 | python|neural-network|tensorflow|conv-neural-network | ['https://arxiv.org/abs/1801.05134'] | 1 |
59,001,644 | <p>I found a paper that explains the disharmony between Dropout and Batch Norm (BN). The key idea is what they call the <strong>"variance shift"</strong>. This is due to the fact that dropout has a different behavior between training and testing phases, which shifts the input statistics that BN learns.
The main idea can be found in this figure which is taken from this <a href="https://arxiv.org/abs/1801.05134" rel="noreferrer">paper</a>.
<a href="https://i.stack.imgur.com/nptD6.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nptD6.png" alt="enter image description here" /></a></p>
<p>A small demo for this effect can be found in this <a href="https://github.com/adelizer/kaggle-sandbox/blob/master/drafts/dropout_bn.ipynb" rel="noreferrer">notebook</a>.</p> | 2019-11-22 20:56:08.197000+00:00 | 2020-07-23 09:05:48.257000+00:00 | 2020-07-23 09:05:48.257000+00:00 | null | 39,691,902 | <p><em>The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.</em></p>
<p>When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?</p>
<p>It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?</p>
<p>Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there <em>is</em> a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.</p>
<p>Thank you much!</p>
<p><strong>UPDATE:</strong></p>
<p>An experimental test <em>seems</em> to suggest that ordering <em>does</em> matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.</p> | 2016-09-25 21:12:23.370000+00:00 | 2022-06-16 12:41:45.240000+00:00 | 2019-02-03 21:14:55.993000+00:00 | python|neural-network|tensorflow|conv-neural-network | ['https://arxiv.org/abs/1801.05134', 'https://i.stack.imgur.com/nptD6.png', 'https://github.com/adelizer/kaggle-sandbox/blob/master/drafts/dropout_bn.ipynb'] | 3 |
53,881,090 | <h2>Usually, just drop the <code>Dropout</code> (when you have <code>BN</code>):</h2>
<ul>
<li>"BN eliminates the need for <code>Dropout</code> in some cases cause BN provides similar regularization benefits as Dropout intuitively"</li>
<li>"Architectures like ResNet, DenseNet, etc. not using <code>Dropout</code> </li>
</ul>
<p>For more details, refer to this paper [<a href="https://arxiv.org/pdf/1801.05134.pdf" rel="noreferrer">Understanding the Disharmony between Dropout and Batch Normalization by Variance Shift</a>] as already mentioned by @Haramoz in the comments.</p> | 2018-12-21 07:58:33.243000+00:00 | 2018-12-21 07:58:33.243000+00:00 | null | null | 39,691,902 | <p><em>The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.</em></p>
<p>When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?</p>
<p>It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?</p>
<p>Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there <em>is</em> a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.</p>
<p>Thank you much!</p>
<p><strong>UPDATE:</strong></p>
<p>An experimental test <em>seems</em> to suggest that ordering <em>does</em> matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.</p> | 2016-09-25 21:12:23.370000+00:00 | 2022-06-16 12:41:45.240000+00:00 | 2019-02-03 21:14:55.993000+00:00 | python|neural-network|tensorflow|conv-neural-network | ['https://arxiv.org/pdf/1801.05134.pdf'] | 1 |
40,295,999 | <p>In <a href="https://arxiv.org/pdf/1502.03167.pdf" rel="noreferrer">Ioffe and Szegedy 2015</a>, the authors state that "we would like to ensure that for any parameter values, the network always produces activations with the desired distribution". So the Batch Normalization layer is actually inserted right after a Conv layer/Fully Connected layer, but before feeding into the ReLU (or any other kind of) activation. See <a href="https://www.youtube.com/watch?v=jhUZ800C650&index=5&list=PLLvH2FwAQhnpj1WEB-jHmPuUeQ8mX-XXG" rel="noreferrer">this video</a> at around the 53 min mark for more details.</p>
<p>As far as dropout goes, I believe dropout is applied after the activation layer. In figure 3b of the <a href="https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf" rel="noreferrer">dropout paper</a>, the dropout factor/probability matrix r(l) for hidden layer l is applied to y(l), where y(l) is the result after applying the activation function f. </p>
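<p>A minimal Keras sketch of one block in that order (the channel count, kernel size and dropout rate below are arbitrary, just to make the ordering concrete):</p>
<pre><code>from tensorflow.keras import Sequential, layers

block = Sequential([
    layers.Conv2D(128, 3, padding="same"),  # CONV
    layers.BatchNormalization(),            # BatchNorm
    layers.ReLU(),                          # activation
    layers.Dropout(0.5),                    # Dropout
])
</code></pre>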
<p>So in summary, the order of using batch normalization and dropout is:</p>
<p>-> CONV/FC -> BatchNorm -> ReLu(or other activation) -> Dropout -> CONV/FC -></p> | 2016-10-27 23:59:30.593000+00:00 | 2016-10-27 23:59:30.593000+00:00 | null | null | 39,691,902 | <p><em>The original question was in regard to TensorFlow implementations specifically. However, the answers are for implementations in general. This general answer is also the correct answer for TensorFlow.</em></p>
<p>When using batch normalization and dropout in TensorFlow (specifically using the contrib.layers) do I need to be worried about the ordering?</p>
<p>It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch normalization trains to the larger scale numbers of the training outputs, but then that same shift is applied to the smaller (due to the compensation for having more outputs) scale numbers without dropout during testing, then that shift may be off. Does the TensorFlow batch normalization layer automatically compensate for this? Or does this not happen for some reason I'm missing?</p>
<p>Also, are there other pitfalls to look out for in when using these two together? For example, assuming I'm using them in the correct order in regards to the above (assuming there <em>is</em> a correct order), could there be trouble with using both batch normalization and dropout on multiple successive layers? I don't immediately see a problem with that, but I might be missing something.</p>
<p>Thank you much!</p>
<p><strong>UPDATE:</strong></p>
<p>An experimental test <em>seems</em> to suggest that ordering <em>does</em> matter. I ran the same network twice with only the batch norm and dropout reverse. When the dropout is before the batch norm, validation loss seems to be going up as training loss is going down. They're both going down in the other case. But in my case the movements are slow, so things may change after more training and it's just a single test. A more definitive and informed answer would still be appreciated.</p> | 2016-09-25 21:12:23.370000+00:00 | 2022-06-16 12:41:45.240000+00:00 | 2019-02-03 21:14:55.993000+00:00 | python|neural-network|tensorflow|conv-neural-network | ['https://arxiv.org/pdf/1502.03167.pdf', 'https://www.youtube.com/watch?v=jhUZ800C650&index=5&list=PLLvH2FwAQhnpj1WEB-jHmPuUeQ8mX-XXG', 'https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf'] | 3 |
48,778,320 | <p>This problem is a research area by itself (and part of my PhD thesis...).
The best solution usually depends on your mathematical definition of "cluster" or "community".
For example, you can minimize the number of inter-cluster edges, which is called the <a href="https://en.wikipedia.org/wiki/Graph_partition" rel="nofollow noreferrer">graph partition problem</a>. </p>
<p>Fortunato wrote a nice review paper on this topic:
<a href="https://arxiv.org/pdf/0906.0612" rel="nofollow noreferrer">https://arxiv.org/pdf/0906.0612</a></p>
<p>My personal favorite, besides our own method, is the simulated annealing. </p> | 2018-02-14 01:25:51.837000+00:00 | 2018-02-14 01:25:51.837000+00:00 | null | null | 48,778,259 | <p>the best way I can explain what I'm looking for is using this picture:</p>
<p><a href="https://i.stack.imgur.com/M6cQY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M6cQY.png" alt="enter image description here"></a></p>
<p>Obviously the visual aid makes it a lot easier for us to group these graphs but I would also think that finding dense sub-graphs should be a solvable problem using an algorithm. I tried MCL algorithm due to its popularity but it wouldn't work fine because it doesn't, seemingly at least, allow directional edges. I attempted to weight the edges differently but that didn't help the clustering process either. I'd like to find dense spots in the graph and I do have a way to verify that a given cluster is viable, there are cases where some elements just can't be together if that helps.</p>
<p>The output of that would be:</p>
<p>Cluster 0: A, B, C</p>
<p>Cluster 1: D, E, F, G</p>
<p>In this case if D is a suspicious element, using a different approach I can figure out which cluster it belongs to.</p> | 2018-02-14 01:17:43.903000+00:00 | 2018-02-14 01:25:51.837000+00:00 | null | python|graph|cluster-analysis | ['https://en.wikipedia.org/wiki/Graph_partition', 'https://arxiv.org/pdf/0906.0612'] | 2
66,721,006 | <p>This doesn't look like a checkerboard artifact, honestly. Also, I don't think the discriminator would be the problem; it's usually about image restoration (the generator or decoder).</p>
<p>I took a quick look at MUNIT, and what they use in the <code>Decoder</code> is <code>torch.nn.Upsample</code> with nearest neighbor upsampling (exact code line <a href="https://github.com/NVlabs/MUNIT/blob/master/networks.py#L232" rel="nofollow noreferrer">here</a>).</p>
<p>You may try to use <code>torch.nn.Conv2d</code> followed by <a href="https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html" rel="nofollow noreferrer"><code>torch.nn.PixelShuffle</code></a>, something like this:</p>
<pre><code>import torch
in_channels = 32
upscale_factor = 2
out_channels = 16
upsampling = torch.nn.Sequential(
torch.nn.Conv2d(
in_channels,
out_channels * upscale_factor * upscale_factor,
kernel_size=3,
padding=1,
),
torch.nn.PixelShuffle(upscale_factor),
)
image = torch.randn(1, 32, 16, 16)
upsampling(image).shape # [1, 16, 32, 32]
</code></pre>
<p>This allows the neural network to learn how to upsample the image instead of merely using <code>torch.nn.Upsample</code>, which the network has no control over (and using the trick below it should also be free of checkerboard artifacts).</p>
<p>Additionally, ICNR initialization for <code>Conv2d</code> should also help (possible implementation <a href="https://github.com/pytorch/pytorch/pull/5429/commits/4fea7cb83366a95b2eb9f64a9161b35d0e412f66" rel="nofollow noreferrer">here</a> or <a href="https://github.com/szymonmaszke/torchlayers/blob/master/torchlayers/upsample.py#L116" rel="nofollow noreferrer">here</a>). This init scheme initializes weights to act similar to nearest neighbor upsampling at the beginning (research paper <a href="https://arxiv.org/pdf/1707.02937.pdf" rel="nofollow noreferrer">here</a>).</p> | 2021-03-20 11:25:33.293000+00:00 | 2021-03-20 11:25:33.293000+00:00 | null | null | 66,720,639 | <p>I know that in Convolution layers the kernel size needs to be a multiplication of stride or else it will produce artefacts in gradient calculations like the checkerboard problem.
Now does it also work like that in Pooling layers? I read somewhere that max pooling can also cause problems like that. Take this line in the discriminator for example:</p>
<pre><code> self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)
</code></pre>
<p>I have a model (MUNIT) with it, and this is the image it produced:</p>
<p><a href="https://i.stack.imgur.com/sxrEG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sxrEG.png" alt="enter image description here" /></a></p>
<p>It looks like the checkerboard problem, or at least a gradient problem but I checked my Convolution layers and didn't found the error described above. They all are of size 4 with stride 2 or an uneven size with stride of 1.</p> | 2021-03-20 10:43:24.497000+00:00 | 2021-03-20 11:25:33.293000+00:00 | null | python|deep-learning|pytorch|generative-adversarial-network|medical-imaging | ['https://github.com/NVlabs/MUNIT/blob/master/networks.py#L232', 'https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html', 'https://github.com/pytorch/pytorch/pull/5429/commits/4fea7cb83366a95b2eb9f64a9161b35d0e412f66', 'https://github.com/szymonmaszke/torchlayers/blob/master/torchlayers/upsample.py#L116', 'https://arxiv.org/pdf/1707.02937.pdf'] | 5 |
36,511,251 | <p>Generally speaking, CYK gives you a maximum-likelihood parse tree. It never gives you the best performance for this reason, and because it ignores contextual information when assigning the probabilities. You need to modify it to consider more context, or integrate it into something else. For example, a Left-Corner parser can use a CYK procedure internally. So the answer to your question is that LC is more powerful than CYK, though it's computationally more expensive. Have a look at Mark Johnson's <a href="http://arxiv.org/pdf/cs/0008017.pdf" rel="nofollow">paper</a>.</p> | 2016-04-09 00:12:45.310000+00:00 | 2016-04-09 07:50:48+00:00 | 2016-04-09 07:50:48+00:00 | null | 36,502,897 | <p>Which one is better for parsing: the Left-corner parsing algorithm or the CYK parsing algorithm? And why?</p> | 2016-04-08 14:58:36.923000+00:00 | 2016-04-09 07:50:48+00:00 | null | nlp|stanford-nlp | ['http://arxiv.org/pdf/cs/0008017.pdf'] | 1
50,311,864 | <h2>Theoretically...</h2>
<blockquote>
<p>Can I say that, the feature maps for earlier layers from conv1 to conv4_2 carry only partial features of my object and from conv5_2 to conv5_3, they carry the features of almost the whole object. Is my consideration true?</p>
</blockquote>
<p>Yes! You even calculated the <strong>receptive field</strong> yourself (in the case of a CNN, this is the set of pixels in the image that can theoretically affect the value of one cell of the feature map)!</p>
<blockquote>
<p>But at conv5_3, my output_size is 31 x 31 only, so I can't visualize how it represents the whole object in the image, but every pixel in that conv5_3 layer represents 196 x 196 size of the original 500 x 500 image. Is my consideration true?</p>
</blockquote>
<p>Yes! But don't forget that although the feature map size is only 31x31, the stride of your features is 16. So each cell of the <code>conv5_3</code> feature map represents a 196x196 region of the image (keep in mind that if the "input window" does not fit inside the image, the rest of the "input window" will be black, i.e. filled with zeros), and the cells have a stride of 16x16 between each other. So that 31x31 feature map still fully captures the image (it is just that the stride is huge).</p>
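<p>For completeness, a small sketch of the standard recursion that produces those numbers (receptive field and "jump", i.e. effective stride, per layer); the layer list is truncated to the first VGG blocks purely for illustration:</p>
<pre><code># (name, kernel_size, stride) per layer; truncated VGG-style prefix for illustration
layers = [("conv1_1", 3, 1), ("conv1_2", 3, 1), ("pool1", 2, 2),
          ("conv2_1", 3, 1), ("conv2_2", 3, 1), ("pool2", 2, 2)]

rf, jump = 1, 1
for name, k, s in layers:
    rf = rf + (k - 1) * jump   # receptive field grows by (k - 1) * current jump
    jump = jump * s            # jump (effective stride) multiplies by the layer stride
    print(name, "RF =", rf, "jump =", jump)
</code></pre>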
<hr />
<h2>Effectively...</h2>
<p>Okay, above we were talking about the <strong>theoretical receptive field</strong>, that is, the pixels in the image that have a probability larger than 0 of affecting one cell (or pixel) in the feature map (31x31, in that case). However, in practice, it heavily depends on the weights of your convolution kernels.</p>
<p>Take a look at <a href="http://blog.christianperone.com/2017/11/the-effective-receptive-field-on-cnns/" rel="nofollow noreferrer">this post</a> about the <strong>effective receptive field</strong> (ERF) of CNNs (or, if you have plenty of time, go straight to the <a href="https://arxiv.org/pdf/1701.04128.pdf" rel="nofollow noreferrer">original paper</a>).</p>
<blockquote>
<p>In theory, when you stack more layers you can increase your receptive field linearly, however, in practice, things aren’t simple as we thought: not all pixels in the receptive field contribute equally to the output unit’s response.</p>
<p>What is actually more even interesting is that this receptive field is dynamic and changes during the training. The impact of this on the backpropagation is that the central pixels will have a larger gradient magnitude when compared to the border pixels.</p>
</blockquote>
<p>Here are some figures from the papers that represents the ERF:</p>
<p><a href="https://i.stack.imgur.com/8tgpm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8tgpm.png" alt="Here are some images of ERF" /></a>
<a href="https://i.stack.imgur.com/h8wn7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h8wn7.png" alt="enter image description here" /></a></p>
<p>As you can see, the receptive field does not cover the whole patch at all! So don't be surprised if the ERF of the <code>conv5_3</code> is much smaller than 196x196.</p>
<hr />
<h2>Also...</h2>
<p>Apart from the size of receptive field, which basically says "this cell on feature map compresses valuable data from this patch of the image", you also need these features to be expressive enough. So, take a look at <a href="https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html" rel="nofollow noreferrer">this post</a> or search "vgg visualization" on google to have some intuitions on the <strong>expressiveness of the features</strong> itself.</p> | 2018-05-13 01:01:28.853000+00:00 | 2018-05-13 01:01:28.853000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 50,148,376 | <p>I can calculate the receptive field size of 500 x 500 input image for VGGNet.</p>
<p>The receptive field sizes are as follows.</p>
<pre><code>Layer Name = conv1, Output size = 500, Stride = 1, RF size = 3
Layer Name = relu1_1, Output size = 500, Stride = 1, RF size = 3
Layer Name = conv1_2, Output size = 500, Stride = 1, RF size = 5
Layer Name = relu1_2, Output size = 500, Stride = 1, RF size = 5
Layer Name = pool1, Output size = 250, Stride = 2, RF size = 6
Layer Name = conv2_1, Output size = 250, Stride = 2, RF size = 10
Layer Name = relu2_1, Output size = 250, Stride = 2, RF size = 10
Layer Name = conv2_2, Output size = 250, Stride = 2, RF size = 14
Layer Name = relu2_2, Output size = 250, Stride = 2, RF size = 14
Layer Name = pool2, Output size = 125, Stride = 4, RF size = 16
Layer Name = conv3_1, Output size = 125, Stride = 4, RF size = 24
Layer Name = relu3_1, Output size = 125, Stride = 4, RF size = 24
Layer Name = conv3_2, Output size = 125, Stride = 4, RF size = 32
Layer Name = relu3_2, Output size = 125, Stride = 4, RF size = 32
Layer Name = conv3_3, Output size = 125, Stride = 4, RF size = 40
Layer Name = relu3_3, Output size = 125, Stride = 4, RF size = 40
Layer Name = pool3, Output size = 62, Stride = 8, RF size = 44
Layer Name = conv4_1, Output size = 62, Stride = 8, RF size = 60
Layer Name = relu4_1, Output size = 62, Stride = 8, RF size = 60
Layer Name = conv4_2, Output size = 62, Stride = 8, RF size = 76
Layer Name = relu4_2, Output size = 62, Stride = 8, RF size = 76
Layer Name = conv4_3, Output size = 62, Stride = 8, RF size = 92
Layer Name = relu4_3, Output size = 62, Stride = 8, RF size = 92
Layer Name = pool4, Output size = 31, Stride = 16, RF size = 100
Layer Name = conv5_1, Output size = 31, Stride = 16, RF size = 132
Layer Name = relu5_1, Output size = 31, Stride = 16, RF size = 132
Layer Name = conv5_2, Output size = 31, Stride = 16, RF size = 164
Layer Name = relu5_2, Output size = 31, Stride = 16, RF size = 164
Layer Name = conv5_3, Output size = 31, Stride = 16, RF size = 196
Layer Name = relu5_3, Output size = 31, Stride = 16, RF size = 196
</code></pre>
<p>I am looking only up to conv5_3.</p>
<p>For example, if my object size is 150 x 150 and my image size is 500 x 500.</p>
<p>Can I say that, the feature maps for earlier layers from conv1 to conv4_2 carry only partial features of my object and from conv5_2 to conv5_3, they carry the features of almost the whole object. </p>
<p>Is my consideration true?</p>
<p>But at conv5_3, my output_size is 31 x 31 only, so I can't visualize how it represents the whole object in the image, but every pixel in that conv5_3 layer represents 196 x 196 size of the original 500 x 500 image.</p>
<p>Is my consideration true?</p> | 2018-05-03 06:34:39.270000+00:00 | 2018-05-13 01:01:28.853000+00:00 | 2018-05-03 06:58:13.377000+00:00 | deep-learning|caffe|conv-neural-network | ['http://blog.christianperone.com/2017/11/the-effective-receptive-field-on-cnns/', 'https://arxiv.org/pdf/1701.04128.pdf', 'https://i.stack.imgur.com/8tgpm.png', 'https://i.stack.imgur.com/h8wn7.png', 'https://blog.keras.io/how-convolutional-neural-networks-see-the-world.html'] | 5 |
56,131,925 | <p>You may consider a one-fits-all model or Seq2Seq, as e.g. this <a href="http://proceedings.mlr.press/v89/mariet19a.html" rel="nofollow noreferrer">Google</a> paper suggests. The approach works as follows:</p>
<ul>
<li>Let us assume that you want to make a 1-day-ahead forecast (24 values) and you are using the last 7 days (7 * 24 = 168 values) as input. </li>
<li><p>In time series analysis the data is time dependent, so you need a validation strategy that respects this time dependence, e.g. a <a href="https://robjhyndman.com/hyndsight/rolling-forecasts/" rel="nofollow noreferrer">rolling forecast</a> approach. Keep separate hold-out data for testing your final trained model. </p></li>
<li><p>In the first step you will generate, out of your many time series, slices of 168 + 24 values (see the Google paper for an image, and the sketch after this list). The x input will have length 168 and the y target length 24. Use all of your generated slices for training the LSTM/GRU network and finally do prediction on your hold-out set.</p></li>
</ul>
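<p>A minimal NumPy sketch of the slicing step described in the list above (one 1-D series of hourly values turned into (168-input, 24-target) pairs; the function and variable names are made up for illustration):</p>
<pre><code>import numpy as np

def make_slices(series, n_in=168, n_out=24):
    """Cut one 1-D series into (input, target) windows of length n_in / n_out."""
    X, y = [], []
    for start in range(len(series) - n_in - n_out + 1):
        X.append(series[start:start + n_in])
        y.append(series[start + n_in:start + n_in + n_out])
    return np.array(X), np.array(y)

series = np.arange(1000, dtype=float)   # placeholder for one device's hourly values
X, y = make_slices(series)
print(X.shape, y.shape)                 # (809, 168) (809, 24)
</code></pre>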
<p>Good papers on this issue:</p>
<ul>
<li><a href="http://proceedings.mlr.press/v89/mariet19a.html" rel="nofollow noreferrer">Foundations of Sequence-to-Sequence Modeling for Time Series</a></li>
<li><a href="https://arxiv.org/abs/1709.01907" rel="nofollow noreferrer">Deep and Confident Prediction for Time Series at Uber</a></li>
<li>more</li>
</ul>
<p>Kaggle Winning Solution</p>
<ul>
<li><a href="https://github.com/Arturus/kaggle-web-traffic" rel="nofollow noreferrer">Kaggle Web Traffic Time Series Forecasting</a></li>
</ul>
<p>List is not comprehensive, but you can use it as a starting point. </p> | 2019-05-14 13:41:53.200000+00:00 | 2019-05-14 13:41:53.200000+00:00 | null | null | 56,088,899 | <p>I have data for hundreds of devices(pardon me, I am not specifying much detail about device and data recorded for devices). For each device, data is recorded per hour basis. </p>
<p>The recorded data has 25 dimensions. </p>
<p>I have a few prediction tasks: </p>
<blockquote>
<p>time series forecasting</p>
</blockquote>
<p>where I am using an LSTM. Because I have hundreds of devices, and each device is a time series (multivariate data), in total my data is multiple time series with multivariate data. </p>
<p>To deal with multiple time series, my first approach is to concatenate the data one after another and treat them as one time series (either univariate or multivariate), then apply an LSTM and train my LSTM model. </p>
<p>But with the above approach (concatenating the time series data), I am actually losing the time property of my data, so I need a better approach. </p>
<p>Please suggest some ideas, or blog posts. </p>
<p><em>Kindly don't confuse with Multiple time series with Multi variate time series data.</em></p> | 2019-05-11 09:18:02.997000+00:00 | 2019-05-14 13:41:53.200000+00:00 | 2019-05-13 21:03:55.363000+00:00 | tensorflow|time-series|lstm | ['http://proceedings.mlr.press/v89/mariet19a.html', 'https://robjhyndman.com/hyndsight/rolling-forecasts/', 'http://proceedings.mlr.press/v89/mariet19a.html', 'https://arxiv.org/abs/1709.01907', 'https://github.com/Arturus/kaggle-web-traffic'] | 5 |
69,845,489 | <p>The paper <a href="https://arxiv.org/pdf/1802.09478.pdf" rel="nofollow noreferrer">In-database connected component analysis</a> describes a SQL-based algorithm (using quite a few tables to store the intermediate results). The paper evaluates the algorithm in the Apache HAWQ DBMS but it seems to be portable to PostgreSQL.</p> | 2021-11-04 20:32:57.423000+00:00 | 2021-11-04 20:32:57.423000+00:00 | null | null | 33,465,859 | <p>I have a graph in my <strong>PostgreSQL</strong> database, for the sake of example let's define it so:</p>
<pre><code>CREATE TABLE nodes (node_id INTEGER);
CREATE TABLE roads (road_id INTEGER, nodes INTEGER[]);
INSERT INTO nodes VALUES (1), (2), (3), (4), (5);
INSERT INTO roads VALUES (1, '{1, 2}'), (2, '{3, 4}');
</code></pre>
<p>I want to create SQL query that returns the number of <a href="https://en.wikipedia.org/wiki/Connected_component_(graph_theory)" rel="nofollow">connected components</a> of the graph, in this example the number is <strong>3</strong>, because nodes 1/2 are connected, 3/4 as well, while 5 is not connected to anything.</p>
<p>I tried searching for <a href="https://en.wikipedia.org/wiki/Disjoint-set_data_structure" rel="nofollow">find&union</a> implementations in SQL but to no avail, I then turned to <a href="http://www.postgresql.org/docs/8.4/static/queries-with.html" rel="nofollow">CTEs</a> but I can't do it on my own, I was thinking of something like this:</p>
<pre><code>WITH RECURSIVE cc(iterator_id, node_id, rank, iterator) AS
(
SELECT row_number() OVER(), n.node_id, row_number() OVER (), 1 FROM nodes AS n
UNION ALL
  -- Something here that does the magic
)
SELECT
COUNT(DISTINCT rank) AS no_of_cc
FROM
cc,
(SELECT COUNT(*) FROM nodes) AS last_iterator_id
WHERE iterator = last_iterator_id;
</code></pre>
<p>where in each iteration we update the ranks of rows whose iterator_id <= iterator. We iterate until <code>iterator</code> is equal to the biggest <code>iterator_id</code>
but I can't think of the recursive part.</p>
<p>Can you help me find the number of connected components?</p> | 2015-11-01 18:47:59.603000+00:00 | 2021-11-04 20:32:57.423000+00:00 | 2015-11-02 00:10:42.110000+00:00 | sql|postgresql|graph|common-table-expression|recursive-query | ['https://arxiv.org/pdf/1802.09478.pdf'] | 1 |
66,200,947 | <p>You may try to iterate the community detection algorithm (Louvain or other) by running it on the too large communities you first find. This will partition them into smaller ones.</p>
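<p>A minimal sketch of that idea (my addition, not the original answer's code), assuming networkx 2.8 or newer, which ships a Louvain implementation; the size threshold is arbitrary:</p>
<pre><code>import networkx as nx
from networkx.algorithms.community import louvain_communities

def split_large_communities(G, max_size=300, seed=42):
    # run Louvain, then re-run it on any community larger than max_size
    final, queue = [], list(louvain_communities(G, seed=seed))
    while queue:
        comm = queue.pop()
        if len(comm) <= max_size:
            final.append(comm)
            continue
        parts = louvain_communities(G.subgraph(comm), seed=seed)
        if len(parts) == 1:          # cannot be split any further
            final.append(comm)
        else:
            queue.extend(parts)
    return final

G = nx.erdos_renyi_graph(2000, 0.01, seed=1)
print(sorted(len(c) for c in split_large_communities(G)))
</code></pre>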
<p>Notice also that Louvain and other community detection algorithms generally do not produce <em>the best</em> partition, but <em>a good</em> partition with respect to a given quality function. In most cases, finding the best partition is NP-hard.</p>
<p>With this in mind, one may include a <em>scale</em> parameter into the quality function, and detect relevant community at different scales: <a href="https://arxiv.org/abs/cs/0608050" rel="nofollow noreferrer">Post-Processing Hierarchical Community Structures: Quality Improvements and Multi-scale View</a></p> | 2021-02-14 23:00:38.707000+00:00 | 2021-02-14 23:00:38.707000+00:00 | null | null | 66,038,698 | <p>I am building a massive network, that is filled with isolated nodes, but also rather large clusters as well. I have used Louvain's algorithm to achieve the best partition - however some communities are too large. I was curious what algorithms (preferably with Python frameworks) have similar run time to Louvain but penalize too large of communities while achieving ideal modularity?</p> | 2021-02-04 02:48:33.693000+00:00 | 2021-02-14 23:00:38.707000+00:00 | null | python|machine-learning|graph|networkx|modularity | ['https://arxiv.org/abs/cs/0608050'] | 1 |
52,978,216 | <p>fields.InputDataFields.groundtruth_weights is <em>most likely</em> a weight that gets multiplied with the loss. See 3.1 in <a href="https://arxiv.org/pdf/1708.02002.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1708.02002.pdf</a></p> | 2018-10-24 21:34:34.650000+00:00 | 2018-10-24 21:34:34.650000+00:00 | null | null | 49,638,765 | <p>I was digging through the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/data_decoders/tf_example_decoder.py" rel="nofollow noreferrer">TfExampleDecoder</a> and saw some fields that don't appear to be documented anywhere. Starting in line 207:</p>
<p><code>
fields.InputDataFields.groundtruth_group_of: (slim_example_decoder.Tensor('image/object/group_of')),
fields.InputDataFields.groundtruth_weights: (slim_example_decoder.Tensor('image/object/weight')),
</code></p>
<p>Is there documentation for the purpose that these serve?</p> | 2018-04-03 20:36:16.273000+00:00 | 2018-10-24 21:34:34.650000+00:00 | null | tensorflow|object-detection | ['https://arxiv.org/pdf/1708.02002.pdf'] | 1 |
42,276,427 | <p>Actually, in the original <a href="https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf" rel="nofollow noreferrer">Inception</a> paper the authors mention the function you provided as the data preprocessor (one which zero-centers all channels and rescales them to the <code>[-1, 1]</code> interval). As no new data transformation is introduced in the <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">InceptionV3</a> paper, I think you may assume that you should use the following function:</p>
<pre><code>def preprocess_input(x):
    x /= 255.   # scale pixel values from [0, 255] to [0, 1]
    x -= 0.5    # shift to [-0.5, 0.5]
    x *= 2.     # rescale to [-1, 1]
    return x
</code></pre> | 2017-02-16 14:14:57.067000+00:00 | 2017-02-16 14:14:57.067000+00:00 | null | null | 42,275,815 | <pre><code>def preprocess_input(x):
x /= 255.
x -= 0.5
x *= 2.
return x
</code></pre>
<p>I am using the <strong>keras</strong> inception_v3 imagenet pretrained model (<a href="https://github.com/fchollet/keras/blob/master/keras/applications/inception_v3.py" rel="nofollow noreferrer">inception_v3.py</a>) to finetune on my own dataset.<br>
When I want to <strong>subtract the imagenet mean value [123.68, 116.779, 103.939] and reverse axis RGB to BGR</strong> as we often do, I find that the author provided a <em>preprocess_input()</em> function at the end. I am confused about this. </p>
<p>Should I use the provided function <em>preprocess_input()</em> <strong>or</strong> subtract the mean value and reverse the axis as usual?<br>
Thanks a lot.</p> | 2017-02-16 13:49:37.900000+00:00 | 2018-05-18 11:40:13.927000+00:00 | 2018-05-18 11:40:13.927000+00:00 | tensorflow|neural-network|keras|deep-learning|theano | ['https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf', 'https://arxiv.org/abs/1512.00567'] | 2
67,779,893 | <p>What word embeddings provide is the ability to trace analogies, an example <a href="https://krishansubudhi.github.io/deeplearning/2019/06/13/WordEmbeddings.html" rel="nofollow noreferrer">here</a>, a paper <a href="https://arxiv.org/pdf/1901.09813.pdf" rel="nofollow noreferrer">here</a></p>
<p><code>v('king') - v('man') ~ v('queen') - v('woman')</code></p>
<p>You can even visualize a projection of these vectors in a 2D plot, a great interactive example where you can explore not only gender analogies <a href="https://lamyiowce.github.io/word2viz/" rel="nofollow noreferrer">here</a></p>
<p>The strategy for finding bias is: hypothesize about some possible bias in the training data, then look for analogies that would exist in that biased view but not in a fair/unbiased view.</p>
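<p>A rough sketch of that strategy with gensim and numpy (my addition, not part of the original answer; the word pairs and probe words are just examples):</p>
<pre><code>import numpy as np
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(
    "data/glove.twitter.27B.200d.w2v.txt", binary=False
)

# average a few "definitional" pair differences into one candidate bias direction
pairs = [("he", "she"), ("man", "woman"), ("his", "her"), ("male", "female")]
g = np.mean([model[a] - model[b] for a, b in pairs], axis=0)
g = g / np.linalg.norm(g)

# project words onto that direction; sign and magnitude hint at the bias
for word in ["kitchen", "engineer", "nurse", "football"]:
    w = model[word] / np.linalg.norm(model[word])
    print(word, float(np.dot(w, g)))
</code></pre>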
<p><a href="https://i.stack.imgur.com/nSmlh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nSmlh.png" alt="enter image description here" /></a></p> | 2021-05-31 20:30:13.550000+00:00 | 2021-05-31 20:30:13.550000+00:00 | null | null | 67,777,097 | <p>I have <code>glove.twitter.27B.200d.txt</code> word embeddings. These embeddings in <code>GloVe</code> format. I transfered it to <code>w2v</code> format using this code:</p>
<pre><code>model = KeyedVectors.load_word2vec_format(
"data/glove.twitter.27B.200d.w2v.txt", binary=False
)
</code></pre>
<p><code>len(model.vocab) == 1193514</code></p>
<p>There is a gender bias in these word embeddings:</p>
<p><code>model.similarity("man", "kitchen") == 0.32785824</code></p>
<p><code>model.similarity("woman", "kitchen") == 0.40180725</code></p>
<p>I want to find a gender bias direction in this word embeddings, but not sure how.</p> | 2021-05-31 16:05:18.773000+00:00 | 2021-05-31 20:30:13.550000+00:00 | 2021-05-31 20:24:23.627000+00:00 | python|nlp|linear-algebra|word2vec|word-embedding | ['https://krishansubudhi.github.io/deeplearning/2019/06/13/WordEmbeddings.html', 'https://arxiv.org/pdf/1901.09813.pdf', 'https://lamyiowce.github.io/word2viz/', 'https://i.stack.imgur.com/nSmlh.png'] | 4 |
67,779,754 | <p>You can use <a href="https://arxiv.org/pdf/1607.06520.pdf" rel="nofollow noreferrer">this paper</a> (Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings) method:</p>
<blockquote>
<p>To more robustly estimate bias, we shall aggregate
across multiple paired comparisons. By combining several directions, such as
<a href="https://i.stack.imgur.com/CibO3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CibO3.png" alt="enter image description here" /></a></p>
<p>identify a gender direction <code>g ∈ R^d</code> that largely captures gender in the embedding. This direction helps us to quantify direct and indirect biases in words and associations.</p>
</blockquote>
<p>So, first, generate some pre-defined pair differences that are used to span a gender subspace. Then:</p>
<blockquote>
<p>To identify the gender subspace, we took the ten gender pair difference vectors and computed its principal
components (PCs). As Figure 6 shows, there is a single direction that explains the majority of variance
in these vectors. The first eigenvalue is significantly larger than the rest. Note that, from the randomness
in a finite sample of ten noisy vectors, one expects a decrease in eigenvalues. However, as also illustrated
in 6, the decrease one observes due to random sampling is much more gradual and uniform. Therefore we
hypothesize that the top PC, denoted by the unit vector g, captures the gender subspace. In general, the
gender subspace could be higher dimensional and all of our analysis and algorithms (described below) work
with general subspaces.</p>
<p><a href="https://i.stack.imgur.com/NMxnq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NMxnq.png" alt="enter image description here" /></a></p>
</blockquote>
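<p>A minimal sketch of that PCA step (my addition, not code from the paper), using the gensim model from the question and scikit-learn; the pair list is approximate:</p>
<pre><code>import numpy as np
from sklearn.decomposition import PCA
from gensim.models import KeyedVectors

model = KeyedVectors.load_word2vec_format(
    "data/glove.twitter.27B.200d.w2v.txt", binary=False
)

pairs = [("woman", "man"), ("girl", "boy"), ("she", "he"), ("mother", "father"),
         ("daughter", "son"), ("gal", "guy"), ("female", "male"), ("her", "his"),
         ("herself", "himself"), ("mary", "john")]
diffs = np.array([model[a] - model[b] for a, b in pairs])

pca = PCA(n_components=10)
pca.fit(diffs)
print(pca.explained_variance_ratio_)  # the first component should dominate
g = pca.components_[0]                # unit vector used as the gender direction
</code></pre>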
<p>The vector <code>g</code> generated from the PC analysis of the gender subspace shows the gender bias direction. Finally, to measure the <strong>DirectBias</strong>:</p>
<blockquote>
<p>To measure direct bias, we first identify words that should be gender-neutral for the application in question.
How to generate this set of gender-neutral words is described in Section 7. Given the gender neutral words,
denoted by N, and the gender direction learned from above, g, we define the direct gender bias of an embedding to be:
<a href="https://i.stack.imgur.com/CaVVj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CaVVj.png" alt="enter image description here" /></a></p>
</blockquote> | 2021-05-31 20:16:27.103000+00:00 | 2021-05-31 20:23:09.823000+00:00 | 2021-05-31 20:23:09.823000+00:00 | null | 67,777,097 | <p>I have <code>glove.twitter.27B.200d.txt</code> word embeddings. These embeddings in <code>GloVe</code> format. I transfered it to <code>w2v</code> format using this code:</p>
<pre><code>model = KeyedVectors.load_word2vec_format(
"data/glove.twitter.27B.200d.w2v.txt", binary=False
)
</code></pre>
<p><code>len(model.vocab) == 1193514</code></p>
<p>There is a gender bias in these word embeddings:</p>
<p><code>model.similarity("man", "kitchen") == 0.32785824</code></p>
<p><code>model.similarity("woman", "kitchen") == 0.40180725</code></p>
<p>I want to find a gender bias direction in this word embeddings, but not sure how.</p> | 2021-05-31 16:05:18.773000+00:00 | 2021-05-31 20:30:13.550000+00:00 | 2021-05-31 20:24:23.627000+00:00 | python|nlp|linear-algebra|word2vec|word-embedding | ['https://arxiv.org/pdf/1607.06520.pdf', 'https://i.stack.imgur.com/CibO3.png', 'https://i.stack.imgur.com/NMxnq.png', 'https://i.stack.imgur.com/CaVVj.png'] | 4 |
1,173,371 | <p>Predicting missing values is generally considered to be part of data cleansing phase which needs to be done before the data is mined or analyzed further. This is quite prominent in real world data.</p>
<p>Please have a look at this algorithm <a href="http://arxiv.org/abs/math/0701152" rel="nofollow noreferrer">http://arxiv.org/abs/math/0701152</a></p>
<p>Currently Microsoft SQL Server Analysis Services 2008 also comes with algorithms like these <a href="http://technet.microsoft.com/en-us/library/ms175312.aspx" rel="nofollow noreferrer">http://technet.microsoft.com/en-us/library/ms175312.aspx</a> which help in predictive modelling of attributes.</p>
<p>cheers</p> | 2009-07-23 17:45:11.783000+00:00 | 2009-07-23 17:45:11.783000+00:00 | null | null | 1,173,239 | <p>I have a database, consisting of a whole bunch of records (around 600,000) where some of the records have certain fields missing. My goal is to find a way to predict what the missing data values should be (so I can fill them in) based on the existing data. </p>
<p>One option I am looking at is clustering - i.e. representing the records that are all complete as points in some space, looking for clusters of points, and then when given a record with missing data values try to find out if there are any clusters that could belong in that are consistent with the existing data values. However this may not be possible because some of the data fields are on a nominal scale (e.g. color) and thus can't be put in order.</p>
<p>Another idea I had is to create some sort of probabilistic model that would predict the data, train it on the existing data, and then use it to extrapolate.</p>
<p>What algorithms are available for doing the above, and is there any freely available software that implements those algorithms (This software is going to be in c# by the way).</p> | 2009-07-23 17:20:10.197000+00:00 | 2009-09-17 02:14:14.033000+00:00 | null | algorithm|math|statistics | ['http://arxiv.org/abs/math/0701152', 'http://technet.microsoft.com/en-us/library/ms175312.aspx'] | 2 |
51,686,860 | <p>The link you posted to a 2012 paper (<a href="https://arxiv.org/pdf/1203.3442.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1203.3442.pdf</a>) seems to describe a rather interesting DCT algorithm: It has low computational complexity (32*17 + 255 multiplies for a 16*16 block and 16*5 + 63 for size 8*8) but also a very regular structure, which makes it easy to synthesise a double-sized variant.</p>
<p>When implementing these things, one should mostly just focus on the butterfly graph: Read from left to right to implement forward (Type II) DCT and from right to left to implement inverse (Type III) DCT. Read text and formulas only when needed to interpret any special symbols in the graph.</p>
<p>That being said, I tried to implement the 8-point DCT II sub-module using the graph from the paper. In this case, the 8 outputs, starting from top, should be re-defined as X0, X4, X2, X6, X1, X3, X5, X7. The first five seem to be within a constant factor of reference DCT output, but I don't seem to get the bottom 3 right.</p>
<p>Here's my code that tries to calculate the 8-point transform:</p>
<pre><code>void fDCT2bb2(float* data, const float factor) {
float a = data[0], b = data[1];
a -= b;
b *= factor;
data[0] = a + b;
data[1] = b - a;
}
void fDCT2bb4(float* data, const float factor) {
float a[2] = {data[0], data[1]};
float b[2] = {data[2], data[3]};
a[0] -= b[1];
a[1] -= b[0];
b[0] *= factor;
b[1] *= factor;
data[0] = a[0] + b[0];
data[1] = a[1] + b[1];
data[2] = b[0] - a[0];
data[3] = b[1] - a[1];
}
void fDCT8point(const float* input, float* output) {
float a[4] = {
input[0] + input[7],
input[1] + input[6],
input[2] + input[5],
input[3] + input[4]
};
float c = a[0];
a[0] += a[3];
a[3] -= c;
c = a[1];
a[1] += a[2];
a[2] -= c;
c = a[0];
a[0] += a[1];
a[1] -= c;
c = a[2];
a[2] = a[3];
a[3] = c;
fDCT2bb2(&a[2], 1.41421356f);
float b[4] = {
input[7] - input[0],
input[6] - input[1],
input[5] - input[2],
input[4] - input[3]
};
fDCT2bb4(b, 1.41421356f);
fDCT2bb2(b, 1.84775906f);
fDCT2bb2(&b[2], -0.76536686f);
output[0] = a[0];
output[4] = a[1];
output[2] = a[2];
output[6] = a[3];
output[1] = b[0];
output[7] = b[1];
output[5] = b[2];
output[3] = b[3];
}
</code></pre>
<p>Any simple change to the above seems to make the output worse. I may have misinterpreted how to implement a "building block" with 4 inputs and 4 outputs from the rather terse description, but there shouldn't be too many ways to do things as it's only supposed to have 2 multiplies and 6 adds.</p>
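<p>(Added note, not from the original answer: a quick way to debug this kind of code is to compare it against a direct O(N^2) reference; a small numpy sketch of unnormalized DCT-II/DCT-III follows, whose round trip should return the input scaled by N/2.)</p>
<pre><code>import numpy as np

def dct2_ref(x):
    # direct O(N^2) DCT-II, no normalization
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k)) for k in range(N)])

def dct3_ref(X):
    # direct O(N^2) DCT-III; inverse of dct2_ref up to a factor of N/2
    N = len(X)
    k = np.arange(1, N)
    return np.array([X[0] / 2 + np.sum(X[1:] * np.cos(np.pi / N * k * (n + 0.5)))
                     for n in range(N)])

x = np.random.rand(16)
roundtrip = dct3_ref(dct2_ref(x))
print(np.allclose(roundtrip, (len(x) / 2) * x))  # True
</code></pre>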
<p><strong>Edit</strong>: I got this fixed by using 5pi/8 instead of 3pi/8 in (=-0.765..) unlike the graph and swapping outputs 3 and 7. Apparently this is an 8-point-only thing, so a 16-point transform should be exactly as in the graph.</p>
<p>Anyway, I have also implemented a similar recursive, regular DCT using this paper: <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.3258&rep=rep1&type=pdf" rel="nofollow noreferrer">http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.3258&rep=rep1&type=pdf</a></p>
<p>The butterfly graph is on page 8. After implementing the 8-point variant, it's easy to see how to keep doubling the transform size when needed. The 2-D expansion is not very relevant today, because SIMDifying it is difficult.</p> | 2018-08-04 15:07:22.863000+00:00 | 2018-08-04 19:24:32.777000+00:00 | 2018-08-04 19:24:32.777000+00:00 | null | 51,657,741 | <p>I'm trying to find a way to perform a fast 16 point dct2 and dct3 transform.</p>
<p>I found some articles like <a href="https://arxiv.org/pdf/1203.3442.pdf" rel="nofollow noreferrer">this one</a> talking about how to do this in mathematical theory, but I'm novice when it comes to reading complex math equations, so honestly I can't understand it.</p>
<p>I searched online for an implementation of a fast 16 point dct, and I found this <a href="http://www.spiral.net/" rel="nofollow noreferrer">code generator</a> which outputs code based on your desired DCT parameters.</p>
<p>I asked it to generate a 16 point dct2 and dct3 with double precision; however, the outputs were not mirror images of the inputs when run through both equations.
This was my input:</p>
<pre><code>// Before DCT
inputArray[ 0] = 12;
inputArray[ 1] = 12;
inputArray[ 2] = 12;
inputArray[ 3] = 14;
inputArray[ 4] = 8;
inputArray[ 5] = 10;
inputArray[ 6] = 12;
inputArray[ 7] = 12;
inputArray[ 8] = 12;
inputArray[ 9] = 12;
inputArray[10] = 12;
inputArray[11] = 12;
inputArray[12] = 12;
inputArray[13] = 12;
inputArray[14] = 12;
inputArray[15] = 12;
</code></pre>
<p>And this was my output</p>
<pre><code>// After DCT and IDCT
outputArray[ 0] = 184;
outputArray[ 1] = 194;
outputArray[ 2] = 178;
outputArray[ 3] = 198;
outputArray[ 4] = 155;
outputArray[ 5] = 141;
outputArray[ 6] = 164;
outputArray[ 7] = 149;
outputArray[ 8] = 138;
outputArray[ 9] = 121;
outputArray[10] = 107;
outputArray[11] = 90;
outputArray[12] = 74;
outputArray[13] = 55;
outputArray[14] = 37;
outputArray[15] = 19;
</code></pre>
<p>I realized the first 5 or so indexes do equal the inputs when divided by 16, however this trend doesn't continue as you go down.</p>
<p>Is this the expected behavior? Or is there something else I need to do to get a proper conversion? </p>
<p>Also I did find an <a href="https://www.nayuki.io/page/fast-discrete-cosine-transform-algorithms" rel="nofollow noreferrer">8 point dct</a> that works well and gives the proper results, is there anyway to expand that into a fast 16 point dct?</p> | 2018-08-02 15:53:27.897000+00:00 | 2018-08-04 19:24:32.777000+00:00 | null | image|image-processing|compression | ['https://arxiv.org/pdf/1203.3442.pdf', 'http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.27.3258&rep=rep1&type=pdf'] | 2 |
66,674,078 | <p>Hi, I just tried testing this repo, <a href="https://github.com/grammarly/gector" rel="nofollow noreferrer">GECToR</a>: it was able to spot the grammatical errors in a sentence, and identifying SVA errors was also supported.</p>
<p>And building a sequence tagger model can also help you, as described in this <a href="https://arxiv.org/abs/2005.12592" rel="nofollow noreferrer">paper</a>.</p> | 2021-03-17 13:27:32.887000+00:00 | 2021-03-17 13:33:28.297000+00:00 | 2021-03-17 13:33:28.297000+00:00 | null | 62,465,980 | <p>Is there any machine learning model for identifying grammatical errors in a sentence? Please note that I've already tried BERT which is a classification based model and it is useful to tell us whether a sentence has any errors or not. But what I want is that a model which could identify exactly which word in sentence violates SVA (Subject Verb Agreement) or which causes error in the sentence?</p> | 2020-06-19 08:08:11.093000+00:00 | 2021-03-17 13:33:28.297000+00:00 | null | python|tensorflow|machine-learning|deep-learning|statistics | ['https://github.com/grammarly/gector', 'https://arxiv.org/abs/2005.12592'] | 2 |
12,068,157 | <p>Both packages do the same. LSMR is based on Fong & Saunders algorithm from 2010 (see <a href="http://arxiv.org/abs/1006.0758" rel="noreferrer">paper</a>), and has been introduced in scipy very recently (ie, version 0.10 and earlier won't have it). According to the paper, LSMR should converge faster than LSQR, which uses the Paige & Saunders algorithm that has been around for almost 30 years. </p> | 2012-08-22 07:23:34.727000+00:00 | 2012-08-22 07:23:34.727000+00:00 | null | null | 12,067,830 | <p>Does anybody know when is better to choose which? They seem the same to me...</p>
<p><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.lsmr.html#scipy.sparse.linalg.lsmr" rel="noreferrer">lsmr</a>
<a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.lsqr.html#scipy.sparse.linalg.lsqr" rel="noreferrer">lsqr</a></p> | 2012-08-22 07:00:00.273000+00:00 | 2012-08-22 07:23:34.727000+00:00 | 2012-08-22 07:15:07.850000+00:00 | scipy|linear-algebra|sparse-matrix | ['http://arxiv.org/abs/1006.0758'] | 1 |
60,335,630 | <p>There is at least one <a href="https://arxiv.org/pdf/2002.08264.pdf" rel="nofollow noreferrer">paper</a> which adapts the attention mechanism to a non-nlp area: The molecule attention transformer. Molecules are constructed like a graph, similar to a sentence. An atom has a distance to the other atoms and they are dependent to each other, like words are in a sentence. In the paper, they "adapt Transformer (Devlin et al., 2018) to chemical molecules by augmenting the self-attention with inter-atomic distances and molecular graph structure."</p>
<p>But there are probably more application fields for transformers, at least where data has a graph-like structure and nodes are somehow dependent to each other.</p> | 2020-02-21 09:28:11.883000+00:00 | 2020-02-21 10:00:45.497000+00:00 | 2020-02-21 10:00:45.497000+00:00 | null | 60,316,158 | <p>When I am looking for attention implementation examples, encoder-decoder structure with attention always comes to the first. Is there any examples that using attention for other area besides NLP?</p> | 2020-02-20 09:01:29.360000+00:00 | 2020-02-21 10:00:45.497000+00:00 | null | machine-learning|nlp|artificial-intelligence|seq2seq|attention-model | ['https://arxiv.org/pdf/2002.08264.pdf'] | 1 |
55,000,744 | <p>I have answered your questions below. I would suggest to read a little bit more about LSTMs, e.g. <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">colah's blog post</a>. This will help you understand what it is about, and you will see that your questions are related to the inner workings of an LSTM network.</p>
<p>1) The decoding LSTM network needs something as an input, just as your encoding LSTM used the input data from your dataset. You could either feedback the output of your decoding LSTM, our just repeat the latent state from your encoder (as your code snippet is doing). There are several variations possible, but it seems like most works use the latent vector for initialization of the hidden state in the decoding LSTM, and then feedback the output to the input when rolling out further. (See e.g. <a href="https://openreview.net/pdf?id=r1cLblgCZ" rel="nofollow noreferrer">Recurrent AE model for multidimensional time series representation</a> and <a href="https://arxiv.org/abs/1412.6581" rel="nofollow noreferrer">Variational Recurrent Auto-encoders</a>)</p>
<p>2) Your input dimension is 1, but over 100 time steps. Thus your actual input dimension is 100x1. If you choose the dimension of your hidden layer in the LSTM to be 32, then your input effectively gets reduced from 100x1 to 32.</p>
<p>If you still require more information, someone posted a <a href="https://github.com/keras-team/keras/issues/5203" rel="nofollow noreferrer">similar question</a> on GitHub.</p> | 2019-03-05 10:33:37.880000+00:00 | 2019-03-05 12:45:27.420000+00:00 | 2019-03-05 12:45:27.420000+00:00 | null | 50,874,009 | <p>I am working on a Variational Autoencoder (VAE) to detect anomalies in time series. So far I worked with this tut <a href="https://blog.keras.io/building-autoencoders-in-keras.html" rel="nofollow noreferrer">https://blog.keras.io/building-autoencoders-in-keras.html</a> and this <a href="https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/" rel="nofollow noreferrer">https://wiseodd.github.io/techblog/2016/12/10/variational-autoencoder/</a>.</p>
<p>Still, I have some trouble while implementing the VAE.
I have 77093 samples which have 1 dimension. I use timesteps=100 to make predictions. So I reshape my x_train as follows:</p>
<pre><code>x_train.shape = (77093, 100, 1)
</code></pre>
<p>The model:</p>
<pre><code>inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(32)(inputs)
mu = Dense(1, activation='linear')(encoded)
log_sigma = Dense(1, activation='linear')(encoded)
z = Lambda(sample_z)([mu, log_sigma])
decoded = RepeatVector(timesteps)(z)
decoded = LSTM(1, return_sequences=True)(decoded)
decoded = LSTM(1)(decoded)
sequence_autoencoder = Model(inputs, decoded)
</code></pre>
<p>I sample from:</p>
<pre><code>def sample_z(args):
mu, log_sigma = args
eps = K.random_normal(shape=(50, 1), mean=0., stddev=1.)
return mu + K.exp(log_sigma / 2) * eps
</code></pre>
<p>The model compiles. But I don't know if it is correct.</p>
<p>1.) I don't really understand the RepeatVector layer and whether it is necessary to repeat my sample z. But if I don't use the RepeatVector layer, the LSTM layer throws an error, because it expects a 3-dim input.</p>
<p>2.) I am not sure about the dimension reduction in the latent variable, because my In_dim=1. What exactly gets reduced?</p>
<p>Thanks in advance.</p> | 2018-06-15 10:45:21.237000+00:00 | 2019-03-05 12:45:27.420000+00:00 | 2018-06-19 11:51:49.267000+00:00 | keras|deep-learning|lstm|autoencoder|inference | ['http://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://openreview.net/pdf?id=r1cLblgCZ', 'https://arxiv.org/abs/1412.6581', 'https://github.com/keras-team/keras/issues/5203'] | 4 |
42,974,030 | <p>Rama!</p>
<p>You can use doc2vec instead of word2vec.</p>
<p>Or you can compute summary statistics over all the word vectors in a phrase: e.g. the mean of each vector component, the median of each component, the min, the max, and so on.</p>
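<p>A minimal numpy sketch of that averaging idea (my addition; it assumes you already have a word-to-vector mapping, e.g. parsed from the trained word2vec output file):</p>
<pre><code>import numpy as np

# toy word -> vector mapping; in practice parse it from the word2vec output file
word_vectors = {
    "suspension": np.array([0.1, -0.2, 0.3]),
    "of": np.array([0.0, 0.1, 0.2]),
    "sitting": np.array([-0.3, 0.4, 0.1]),
}

def phrase_vector(phrase, vectors):
    # mean of the vectors of all known words in the phrase
    words = [w for w in phrase.lower().split() if w in vectors]
    if not words:
        return None
    return np.mean([vectors[w] for w in words], axis=0)

print(phrase_vector("Suspension of sitting", word_vectors))
</code></pre>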
<p>It's one of the papers with a description of this technique: <a href="https://arxiv.org/abs/1607.01759" rel="nofollow noreferrer">https://arxiv.org/abs/1607.01759</a></p> | 2017-03-23 10:56:26.430000+00:00 | 2017-03-23 10:56:26.430000+00:00 | null | null | 42,953,252 | <p>I have a text file with phrases on each line. If I run the word2vec on this file it gives me a numerical vector by tokenizing the file into words. Like this,</p>
<pre><code>the -0.464252 0.177642 -1.212928 0.737752 0.990782 1.530809 1.053639
0.182065 0.753926 0.082467
of -0.281145 0.060403 -0.877230 0.566957 0.748220 1.108621 0.711598
0.135636 0.489113 0.059783
to -0.352605 0.101068 -0.995506 0.600547 0.809564 1.360837 0.905638
0.114751 0.596093 0.067007
</code></pre>
<p>Instead, I want it to treat each line as a word and output a single vector for each line. Something like this,</p>
<pre><code>Suspension of sitting -0.244289 0.111375 -0.722939 0.366711 0.590016 0.904601 0.622145 0.098230 0.431038 0.008134
</code></pre>
<p>This is the package I'm using. '<a href="https://github.com/danielfrg/word2vec" rel="nofollow noreferrer">https://github.com/danielfrg/word2vec</a>'</p>
<p>How do I accomplish this?</p> | 2017-03-22 13:36:31.200000+00:00 | 2017-03-23 10:56:26.430000+00:00 | 2017-03-22 13:47:27.730000+00:00 | python|machine-learning|nlp|text-mining|word2vec | ['https://arxiv.org/abs/1607.01759'] | 1 |
9,795,832 | <p>This is a very broad question. In general, neural networks with one hidden layer, a nonlinear activation function and a sufficient number of hidden neurons are able to approximate any function with arbitrary precision. However, the error function is not convex and thus the result of the training depends on the initialization.</p>
<p>SVMs are able to approximate any function, too. They are very popular because the optimization problem has a unique solution and there might be some other reasons. But recent research has shown that neural networks like multilayer perceptrons, convolutional neural networks, deep belief neural networks, multi-column deep neural networks etc. are more efficient and achieve better results for complex applications with a huge amount of data. So it is always a trade-off as LiKao stated (no free lunch theorem) and no classifier is "perfect".</p>
<p>Here is a paper that describes the advantages of deep networks in comparison to "shallow networks" which includes Support Vector Machines: <a href="http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf">http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf</a></p>
<p>Here is a standard benchmark and a comparison of different learning algorithms: <a href="http://yann.lecun.com/exdb/mnist/">http://yann.lecun.com/exdb/mnist/</a></p>
<p>Here is a paper that describes a new kind of neural networks that is especially good at solving some vision problems (traffic sign recognition, ocr): <a href="http://arxiv.org/abs/1202.2745">http://arxiv.org/abs/1202.2745</a></p> | 2012-03-20 22:13:21.863000+00:00 | 2012-03-21 19:29:27.880000+00:00 | 2012-03-21 19:29:27.880000+00:00 | null | 9,795,451 | <p>Would I be right in saying a neural network are good at finding 'good enough' solutions for a problem?</p>
<p>I'm thinking this because they don't provide a binary output for a given input but a probability; for example, 0.67 could be an output.</p>
<p>I'm also guessing that because they're often used for generalisation, they're good at finding solutions that often solve the problem but in some rare cases won't.</p>
<p>Thank you!</p> | 2012-03-20 21:42:50.143000+00:00 | 2017-03-15 15:36:21.520000+00:00 | 2012-03-20 21:48:27.693000+00:00 | artificial-intelligence|neural-network | ['http://yann.lecun.com/exdb/publis/pdf/bengio-lecun-07.pdf', 'http://yann.lecun.com/exdb/mnist/', 'http://arxiv.org/abs/1202.2745'] | 3 |
40,581,899 | <p>See <a href="https://arxiv.org/abs/1210.2610" rel="nofollow noreferrer">https://arxiv.org/abs/1210.2610</a>, page 5. Here is some example code:</p>
<pre><code>from itertools import chain, count
from functools import lru_cache
@lru_cache(maxsize=None)
def terms(size, level=0):
if size == 0:
return tuple(range(level))
else:
abstractions = (
('abs', term)
for term in terms(size - 1, level + 1)
)
applications = (
('app', term1, term2)
for i in range(size)
for term1 in terms(i, level)
for term2 in terms(size - 1 - i, level)
)
return tuple(chain(abstractions, applications))
def string(term):
if isinstance(term, tuple):
if term[0] == 'abs':
return '(λ {})'.format(string(term[1]))
elif term[0] == 'app':
return '({} {})'.format(string(term[1]), string(term[2]))
else:
return term
for size in count():
print('{} terms of size {}'.format(len(terms(size)), size))
for term in terms(size):
pass # input(string(term))
</code></pre>
<p>This outputs</p>
<pre><code>0 terms of size 0
1 terms of size 1
3 terms of size 2
14 terms of size 3
82 terms of size 4
579 terms of size 5
4741 terms of size 6
43977 terms of size 7
454283 terms of size 8
</code></pre>
<p>and so on (i.e. <a href="http://oeis.org/A220894" rel="nofollow noreferrer">this sequence</a>).</p> | 2016-11-14 04:29:12.677000+00:00 | 2016-11-14 04:29:12.677000+00:00 | null | null | 21,012,577 | <p>What is an algorithm that will enumerate expressions for the lambda calculus by order of length? For example, <code>(λx.x), (λx.(x x)), (λx.(λy.x))</code> and so on?</p> | 2014-01-09 05:54:43.737000+00:00 | 2016-11-14 04:29:12.677000+00:00 | null | algorithm|functional-programming|lambda-calculus | ['https://arxiv.org/abs/1210.2610', 'http://oeis.org/A220894'] | 2 |
40,957,646 | <p>The paper Faster R-CNN encodes the rectangles and the anchors as x_center, y_center, width and height.
This also depends on your choice of anchor encoding, I think. If you used the code from the original publication, though, I think you should refactor the boxes as described in the paper:</p>
<blockquote>
<p>For bounding box regression, we adopt the parameterizations of the 4 coordinates following [5]:</p>
<p>[...]</p>
<p>Where x, y, w, and h denote the box’s center coordinates and its width and height. Variables x, xa, and x∗ are for the predicted box, anchor box, and groundtruth box respectively (likewise for y, w, h)</p>
</blockquote>
<hr />
<p>Source: page 5 of <a href="https://arxiv.org/pdf/1506.01497v3" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.01497v3</a></p> | 2016-12-04 10:27:58.620000+00:00 | 2016-12-04 10:27:58.620000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 37,493,161 | <p>I have been training Faster RCNN over custom dataset but with some anomalous results. The network's performance deteriorates for bot validation and training data, with the increase in training iterations. Even though the loss is decreasing, which is surprising. The objective is to detect leaves.</p>
<p>Below are the images at 200 and 165000 iterations respectively </p>
<p><a href="https://i.stack.imgur.com/McQZm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/McQZm.jpg" alt="Output at 200 Iterations "></a></p>
<p><a href="https://i.stack.imgur.com/wiB0V.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wiB0V.jpg" alt="output at 165000 Iterations"></a></p>
<p>The thing to note here is after 165000 iterations, the network starts to draw boxes at background too.</p>
<p>I think this is because of some fault in annotations for training data, as loss is decreasing with the training.</p>
<p>The annotations file that I made has a coordinate system similar to matlab, i.e. (0,0) as top left of the image and thus for each bounding box top left corner is (x_min, y_min) and bottom right is (x_max,y_max). Is this the way it is supposed to be, if that is so, what else could the problem be?</p> | 2016-05-27 22:30:16.957000+00:00 | 2016-12-04 10:27:58.620000+00:00 | null | computer-vision|deep-learning|caffe|conv-neural-network|pycaffe | ['https://arxiv.org/pdf/1506.01497v3'] | 1 |
56,828,558 | <p>This is exactly how the <a href="https://arxiv.org/abs/1810.04805" rel="nofollow noreferrer">BERT</a> model was trained: mask some random words in the sentence, and make your network predict these words. So yes, it is feasible. And no, it is not necessary to have the list of suggested words as a training input. However, these suggested words should be part of the overall vocabulary with which this BERT has been trained.</p>
<p>I adapted <a href="https://stackoverflow.com/questions/54978443/predicting-missing-words-in-a-sentence-natural-language-processing-model">this answer</a> to show how the completion function may work. </p>
<pre><code># install this package to obtain the pretrained model
# ! pip install -U pytorch-pretrained-bert
import torch
from pytorch_pretrained_bert import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForMaskedLM.from_pretrained('bert-base-uncased')
model.eval(); # turning off the dropout
def fill_the_gaps(text):
text = '[CLS] ' + text + ' [SEP]'
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
segments_ids = [0] * len(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
with torch.no_grad():
predictions = model(tokens_tensor, segments_tensors)
results = []
for i, t in enumerate(tokenized_text):
if t == '[MASK]':
predicted_index = torch.argmax(predictions[0, i]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
results.append(predicted_token)
return results
print(fill_the_gaps(text = 'I bought an [MASK] because its rainy .'))
print(fill_the_gaps(text = 'Im sad because you are [MASK] .'))
print(fill_the_gaps(text = 'Im worried because you are [MASK] .'))
print(fill_the_gaps(text = 'Im [MASK] because you are [MASK] .'))
</code></pre>
<p>The <code>[MASK]</code> symbol indicates the missing words (there can be any number of them). <code>[CLS]</code> and <code>[SEP]</code> are BERT-specific special tokens. The outputs for these particular prints are</p>
<pre><code>['umbrella']
['here']
['worried']
['here', 'here']
</code></pre>
<p>The duplication is not surprising - transformer NNs are generally good at copying words. And from a semantic point of view, these symmetric continuations do indeed look very likely. </p>
<p>Moreover, if it is not a random word which is missing, but exactly the last word (or last several words), you can utilize any language model (e.g. another famous SOTA language model, <a href="https://openai.com/blog/better-language-models/" rel="nofollow noreferrer">GPT-2</a>) to complete the sentence. </p> | 2019-06-30 22:45:09.703000+00:00 | 2019-06-30 22:45:09.703000+00:00 | null | null | 56,822,991 | <p>I have somewhat read a bunch of papers which talks about predicting missing words in a sentence. What I really want is to create a model that suggest a word from an incomplete sentence. </p>
<pre><code> Example:
Incomplete Sentence :
I bought an ___________ because its rainy.
Suggested Words:
umbrella
soup
jacket
</code></pre>
<p>In the journal articles I have read, they utilized the Microsoft Sentence Completion Dataset for predicting missing words in a sentence. </p>
<pre><code> Example :
Incomplete Sentence :
Im sad because you are __________
Missing Word Options:
a) crying
b) happy
c) pretty
d) sad
e) bad
</code></pre>
<p>I don't want to predict a missing word from a list of options. I want to suggest a list of words from an incomplete sentence. Is it feasible? Please enlighten me, because I'm really confused. What is the state-of-the-art model I can use for suggesting a list of (semantically coherent) words from an incomplete sentence?</p>
<p>Is it necessary that the list of suggested words as an output is included in the training dataset? </p> | 2019-06-30 06:53:32.710000+00:00 | 2019-06-30 22:45:09.703000+00:00 | 2019-06-30 09:50:55.863000+00:00 | nlp | ['https://arxiv.org/abs/1810.04805', 'https://stackoverflow.com/questions/54978443/predicting-missing-words-in-a-sentence-natural-language-processing-model', 'https://openai.com/blog/better-language-models/'] | 3 |
64,738,482 | <p>Looks like in this scenario hash join can't be beaten having seen <a href="https://cs-people.bu.edu/mathan/reading-groups/papers-classics/join.pdf" rel="nofollow noreferrer">this</a>, <a href="https://people.eecs.berkeley.edu/%7Efox/summaries/database/join.html" rel="nofollow noreferrer">this</a>, <a href="https://docs.teradata.com/reader/Ws7YT1jvRK2vEr1LpVURug/chrMh_r4k9OgT9J6rWagsA" rel="nofollow noreferrer">this</a>, <a href="https://databricks.com/de/session_na20/on-improving-broadcast-joins-in-apache-spark-sql" rel="nofollow noreferrer">this</a>, <a href="https://kafka.apache.org/23/documentation/streams/developer-guide/dsl-api.html#streams-developer-guide-dsl-joins" rel="nofollow noreferrer">this</a>, <a href="https://arxiv.org/pdf/1805.05874.pdf" rel="nofollow noreferrer">this</a> and the comments to the OP. But it can be matched, <a href="https://en.wikipedia.org/wiki/Sort-merge_join" rel="nofollow noreferrer">sort-merge-join</a> (which needs objects that are comparable, not just equal or not) has the same run time complexity, yet it can't beat the simplicity of the hash join implementation.</p>
<p>Still one can <a href="http://www.vldb.org/pvldb/vol8/p353-barber.pdf" rel="nofollow noreferrer">tweak the Hashmap</a> as such, go into parallel algorithms and consider hardware related aspects (see the references <a href="http://www.vldb.org/pvldb/vol8/p353-barber.pdf" rel="nofollow noreferrer">here</a>).</p> | 2020-11-08 13:16:07.350000+00:00 | 2020-11-08 21:59:39.680000+00:00 | 2020-11-08 21:59:39.680000+00:00 | null | 64,737,565 | <p>Say we have 2 Collections (that fit into memory) with elements that can be tested for equality (not necessarily overriding <code>equals()</code>), e.g.</p>
<pre><code>Collection<Integer> c1 = List.of(1,2,3,4,5);
Collection<Integer> c2 = List.of(0,2,3,5,9);
</code></pre>
<p>And some non-parallel method <code>equiJoin(c1, c2)</code> that gives you a <code>Collection<Tuple<Integer, Integer>></code> of elements in both lists that are equal <code>((2,2), (3,3), (5,5))</code>. See <a href="https://de.wikibooks.org/wiki/Relationenalgebra_und_SQL:_Equi-Join" rel="nofollow noreferrer">this</a> for its treatment in the RDBMS context. Creating pairs of equal integers is pretty useless, but think of of any case where lists of records, say one from a db and one from a web service call need to be matched like in <a href="https://stackoverflow.com/questions/64731157/optimizing-nested-loops-for-joining-lists-while-reading-json/64731681#64731681">this post</a>. And let's assume, the equality boils down to simple types and doesn't create a whole lot of complexity on its own.</p>
<p>Joins have been chewed on to death in the area of <a href="https://en.wikipedia.org/wiki/Category:Join_algorithms" rel="nofollow noreferrer">relational databases</a>, but not so much in general programming as it seems.</p>
<p>What we often see (like <a href="https://stackoverflow.com/questions/64731157/optimizing-nested-loops-for-joining-lists-while-reading-json/64731681#64731681">here</a>) is the nested loop approach with a not so nice run time complexity of <strong>O(M*N)</strong>. I know only of one <a href="https://github.com/jOOQ/jOOL" rel="nofollow noreferrer">framework</a> that incorporates equiJoin in Java streams, but it <a href="https://github.com/jOOQ/jOOL/blob/main/jOOL/src/main/java/org/jooq/lambda/SeqImpl.java" rel="nofollow noreferrer">looks like</a> it also does nested loops (it actually allows for arbitrary functions, not just equality, that's a reason for nested loop).</p>
<pre><code> List<Tuple<Integer, Integer>> joined = new ArrayList<>();
for (Integer i : c1) {
for (Integer j : c2) {
if (i.equals(j)) {
joined.add(new Tuple<>(i, j));
}
}
}
</code></pre>
<p>If you want to do better you go for a hash join with much nicer <strong>O(M+N)</strong>:</p>
<pre><code> List<Tuple<Integer, Integer>> joined = new ArrayList<>();
HashMap<Integer, Integer> C1 = new HashMap<>();
// imagine its not Integer but e.g. bank accounts and a Map<Key, Account>
c1.forEach(i -> C1.put(i, i));
for (Integer i : c2) {
Integer j = C1.get(i);
if (j != null) {
joined.add(new Tuple<>(i, j));
}
}
</code></pre>
<p>Are there aspects in the problem that can be utilized to improve which I didn't see, or are there existing algorithms/implementations that do better? Like some exotic binary encoding or reduction of search space while matching, things like that. Everything I can think of isn't really promising or likely introduces more work than it eliminates. Perhaps there's even consensus or proof that you can't do better than hash join, then that's also an answer.</p> | 2020-11-08 11:30:50.927000+00:00 | 2020-11-09 00:24:54.290000+00:00 | 2020-11-09 00:24:54.290000+00:00 | java|algorithm|collections|inner-join | ['https://cs-people.bu.edu/mathan/reading-groups/papers-classics/join.pdf', 'https://people.eecs.berkeley.edu/%7Efox/summaries/database/join.html', 'https://docs.teradata.com/reader/Ws7YT1jvRK2vEr1LpVURug/chrMh_r4k9OgT9J6rWagsA', 'https://databricks.com/de/session_na20/on-improving-broadcast-joins-in-apache-spark-sql', 'https://kafka.apache.org/23/documentation/streams/developer-guide/dsl-api.html#streams-developer-guide-dsl-joins', 'https://arxiv.org/pdf/1805.05874.pdf', 'https://en.wikipedia.org/wiki/Sort-merge_join', 'http://www.vldb.org/pvldb/vol8/p353-barber.pdf', 'http://www.vldb.org/pvldb/vol8/p353-barber.pdf'] | 9 |
58,744,782 | <p><a href="https://arxiv.org/pdf/1811.03716.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1811.03716.pdf</a> has a description.</p>
<p>Summary:</p>
<ul>
<li><p>Say router Rdst wants to influence the path that inbound traffic takes, say for example traffic from router Rsrc to router Rdst.</p></li>
<li><p>Say the shortest path from Rsrc to Rdst goes through some intermediate router Rint.</p></li>
<li><p>Let's say router Rdst in autonomous system ASdst, router Rsrc is autonomous system ASsrc, router Rint is in autonomous system ASint, etc. (see figure below)</p></li>
<li><p>In particular, router Rdst wants to enforce that the traffic does <em>not</em> go through router Rint but instead takes some longer route, for example through some alternative routers Ralt1 and Ralt2 (once again, see figure below).</p></li>
<li><p>To achieve this, router Rdst "poisons" its routes when it sends out BGP advertisements for its own destination prefix:</p>
<ul>
<li><p>Instead of advertising the normal AS-path (ASdst), it instead advertises (ASdst, ASint, ASdst).</p></li>
<li><p>Note that Rdst is "lying": it claims that the path already went through ASint, when in fact it didn't.</p></li>
<li><p>It also adds an extra ASdst to make sure that the first AS in the AS-path still looks normal (= the AS of the advertising router).</p></li>
<li><p>When Rint receives the BGP UPDATE advertised by Rdst, it will see that there is a loop in the AS-path and treat the UPDATE as a withdraw. In particular, it will not propagate the advertisement to Rsrc.</p></li>
<li><p>On the other hand, the BGP advertisement will propagate normally from Rdst to Ralt1 to Ralt2 to Rsrc.</p></li>
<li><p>Hence, from the perspective of Rsrc, the only remaining feasible path is Rsrc -> Ralt2 -> Ralt1 -> Rdst.</p></li>
</ul></li>
<li><p>Ergo: Rdst has achieved its goal of forcing the traffic to avoid Rint.</p></li>
</ul>
<pre>
Rdst (ASdst)
____/ \_____
/ \
Ralt1 (ASalt1) Rint (ASint)
| |
    Ralt2 (ASalt2)           |
\____ ____/
\ /
Rsrc (ASsrc)
</pre> | 2019-11-07 08:36:04.517000+00:00 | 2019-11-09 10:02:52.587000+00:00 | 2019-11-09 10:02:52.587000+00:00 | null | 58,592,323 | <p>I found statements "BGP poisoning" and "poisoned AS" in several papers and sometimes it seems to refer to something that is done the achieve a certain thing sometimes it is considered something bad but it is never explained what exactly "BGP poisoning" actually is. </p>
<p>As I wasn't able to find an answer to this question myself I would appreciate if you could provide me your understanding of the concept.</p> | 2019-10-28 14:01:25.430000+00:00 | 2019-11-09 10:02:52.587000+00:00 | null | routing|bgp | ['https://arxiv.org/pdf/1811.03716.pdf'] | 1 |
68,026,899 | <p>I have no idea what the <code>tinyTextR</code> package's <code>Doc2Vec</code> function that you've mentioned is doing - Google searches turn up no documentation of its functionality. But if it's instant, and it requires word-vectors as an input, perhaps it's just averaging all the word-vectors for the text's words together.</p>
<p>You can read all about Gensim's <code>Doc2Vec</code> model in the Gensim documentation:</p>
<p><a href="https://radimrehurek.com/gensim/models/doc2vec.html" rel="nofollow noreferrer">https://radimrehurek.com/gensim/models/doc2vec.html</a></p>
<p>As its intro explains:</p>
<blockquote>
<p>Learn paragraph and document embeddings via the distributed memory and distributed bag of words models from <a href="https://arxiv.org/abs/1405.4053v2" rel="nofollow noreferrer">Quoc Le and Tomas Mikolov: “Distributed Representations of Sentences and Documents”</a>.</p>
</blockquote>
<p>The algorithm that Gensim <code>Doc2Vec</code> implements is also commonly called 'Paragraph Vector' by its authors, including in the followup paper by Le et al <a href="https://arxiv.org/abs/1507.07998" rel="nofollow noreferrer">"Document Embeddings With Paragraph Vector"</a>.</p>
<p>'Paragraph Vector' uses a word2vec-like training process to learn text-vectors for paragraphs (or other texts of many words). This process does <em>not</em> require prior word-vectors as an input, but many modes will co-train word-vectors along with the doc-vectors. It <em>does</em> require training on a set of documents, but after training the <code>.infer_vector()</code> method can be used to train-up vectors for new texts, not in the original training set, to the extent they use the same words. (Any new words in such post-model-training documents will be ignored.)</p>
<p>You might be able to approximate your R function with something simple like an average-of-word-vectors.</p>
<p>Or, you could try the alternate <code>Doc2Vec</code> in Gensim.</p>
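<p>For example, a minimal Gensim <code>Doc2Vec</code> run looks roughly like this (my sketch, using a toy corpus):</p>
<pre><code>from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "never jump over the lazy dog quickly",
    "a fast dark fox leaps above a sleepy hound",
]
train_docs = [TaggedDocument(words=text.split(), tags=[i])
              for i, text in enumerate(corpus)]

model = Doc2Vec(train_docs, vector_size=50, min_count=1, epochs=40)

# infer a vector for a brand-new text; only words seen in training are used
print(model.infer_vector("the lazy fox sleeps".split()))
</code></pre>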
<p>But, the Gensim <code>Doc2Vec</code> is definitely something different, and it's unfortunate the two libraries use the same <code>Doc2Vec</code> name for different processes.</p> | 2021-06-17 21:48:47.987000+00:00 | 2021-06-17 21:48:47.987000+00:00 | null | null | 68,025,964 | <p>I have been tasked with putting a document vector model into production.
I am an R user, and so my original model is in R. One of the avenues we have is to recreate the code and the models in Python.</p>
<p><strong>I am confused by the Gensim implementation of Doc2vec</strong>.</p>
<p>The process that works in R goes like this:</p>
<p><strong>Offline</strong></p>
<hr />
<ul>
<li><p>Word vectors are trained using the functions in the <code>text2vec</code> package, namely GloVe or GlobalVectors, on a large corpus. This gives me a large Word Vector text file.</p>
</li>
<li><p>Before the ML step takes place, the <code>Doc2Vec</code> function from the <code>TextTinyR</code> library is used to turn each piece of text from a smaller, more specific training corpus into a vector. <em>This is not a machine learning step. No model is trained</em>. The Doc2Vec function effectively aggregates the word vectors in the sentence, in the same sense that finding the sum or mean of vectors does, but in a way that preserves information about word order.</p>
</li>
<li><p>Various models are then trained on these smaller text corpuses.</p>
</li>
</ul>
<hr />
<p><strong>Online</strong></p>
<hr />
<ul>
<li>The new text is converted to Document Vectors using the pretrained word vectors.</li>
<li>The Document Vectors are fed into the pretrained model to obtain the output classification.</li>
</ul>
<hr />
<p><strong>The example code I have found for Gensim appears to be a radical departure from this.</strong></p>
<p>It appears in <code>gensim</code> that Doc vectors are a separate class of model from word vectors that you can train. It seems in some cases, the word vectors and doc vectors are all trained at once. Here are some examples from tutorials and stackoverflow answers:</p>
<p><a href="https://medium.com/@mishra.thedeepak/doc2vec-simple-implementation-example-df2afbbfbad5" rel="nofollow noreferrer">https://medium.com/@mishra.thedeepak/doc2vec-simple-implementation-example-df2afbbfbad5</a></p>
<p><a href="https://stackoverflow.com/questions/27470670/how-to-use-gensim-doc2vec-with-pre-trained-word-vectors">How to use Gensim doc2vec with pre-trained word vectors?</a></p>
<p><a href="https://stackoverflow.com/questions/36815038/how-to-load-pre-trained-model-with-in-gensim-and-train-doc2vec-with-it?rq=1">How to load pre-trained model with in gensim and train doc2vec with it?</a></p>
<p><a href="https://stackoverflow.com/questions/45037860/gensim1-0-1-doc2vec-with-google-pretrained-vectors?noredirect=1&lq=1">gensim(1.0.1) Doc2Vec with google pretrained vectors</a></p>
<p>So my questions are these:</p>
<p><strong>Is the gensim implementation of Doc2Vec fundamentally different from the TextTinyR implementation?</strong></p>
<p><strong>Or is the gensim doc2vec model basically just encapsulating the word2vec model and the doc2vec process into a single object?</strong></p>
<p><strong>Is there anything else I'm missing about the process?</strong></p> | 2021-06-17 20:09:41.287000+00:00 | 2021-06-19 17:28:20.967000+00:00 | null | python|r|gensim|word2vec|doc2vec | ['https://radimrehurek.com/gensim/models/doc2vec.html', 'https://arxiv.org/abs/1405.4053v2', 'https://arxiv.org/abs/1507.07998'] | 3 |
45,943,231 | <p>This problem has been tackled by Yann LeCun in the 90's. You can find demos and papers on his <a href="http://yann.lecun.com/exdb/lenet/" rel="nofollow noreferrer">website</a>. </p>
<p>A not so general solution is to train a CNN on single digits MNIST and use this CNN to perform inference on images like the one you provided. Prediction is done by sliding the trained CNN on the multi-digit image and applying post processing to aggregate the results and possibly estimating the bounding boxes.</p>
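<p>(Added illustration, not part of the original answer: a rough Keras/numpy sketch of the sliding-window idea, with an untrained stand-in classifier; in practice you would first train it on single MNIST digits.)</p>
<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Flatten, Dense

# stand-in single-digit classifier (28x28 input, 10 classes); train on MNIST first
digit_model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    Flatten(),
    Dense(10, activation="softmax"),
])

image = np.random.rand(28, 140)  # a five-digit strip like the one in the question
stride = 14
windows = [image[:, x0:x0 + 28] for x0 in range(0, image.shape[1] - 28 + 1, stride)]
windows = np.stack(windows)[..., np.newaxis]          # (num_windows, 28, 28, 1)

probs = digit_model.predict(windows, verbose=0)
print(probs.argmax(axis=1))  # per-window digit guesses; post-process to aggregate
</code></pre>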
<p>A very general solution that can handle a variable number of numbers at different scales and positions is to build a model that is able to predict the bounding boxes of the numbers and perform classification on them. There's a recent history of such models with R-CNN, Fast-RCNN and <a href="https://arxiv.org/abs/1506.01497" rel="nofollow noreferrer">Faster-RCNN</a>. </p>
<p>You can find a python implementation of Faster-RCNN on <a href="https://github.com/rbgirshick/py-faster-rcnn" rel="nofollow noreferrer">github.</a></p> | 2017-08-29 15:39:46.557000+00:00 | 2017-08-29 15:39:46.557000+00:00 | null | null | 43,225,218 | <p>How can I train the model to recognize five numbers in one picture.
The code is as follows:</p>
<pre><code>from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dropout, Dense, Input
from keras.models import Model, Sequential
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28, 140, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dropout(0.5))
</code></pre>
<p>Here should be a loop for recognizing each number in the picture, but I don't know how to realize it.</p>
<pre><code>model.add(Dense(11, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(X_train, y_train,
batch_size=1000,
epochs=8,
verbose=1,
validation_data=(X_valid, y_valid))
</code></pre>
<p>The picture of combined mnist number is as follows:</p>
<p><img src="https://i.stack.imgur.com/uOFFU.png" alt="combined numbers in one picture"></p> | 2017-04-05 07:57:39.007000+00:00 | 2017-09-04 15:57:07.510000+00:00 | 2017-09-01 12:17:59.860000+00:00 | python|machine-learning|deep-learning|keras|mnist | ['http://yann.lecun.com/exdb/lenet/', 'https://arxiv.org/abs/1506.01497', 'https://github.com/rbgirshick/py-faster-rcnn'] | 3 |
45,972,869 | <p>The classic work in this area is <a href="http://arxiv.org/abs/1312.6082" rel="nofollow noreferrer">'Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks'</a> </p>
<p>Keras model (functional, not sequential):</p>
<pre><code>inputs = Input(shape=(28, 140, 1), name="input")
x = inputs
x = Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 140, 1))(x)
x = Conv2D(64, (3, 3), activation='relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
x = Dropout(0.25)(x)
x = Flatten()(x)
x = Dropout(0.5)(x)
digit1 = Dense(10, activation='softmax', name='digit1')(x)
digit2 = Dense(10, activation='softmax', name='digit2')(x)
digit3 = Dense(10, activation='softmax', name='digit3')(x)
digit4 = Dense(10, activation='softmax', name='digit4')(x)
digit5 = Dense(10, activation='softmax', name='digit5')(x)
predictions = [digit1,digit2,digit3,digit4,digit5]
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer=Adam(), metrics=['accuracy'], loss='categorical_crossentropy')
</code></pre>
<p>PS
You may use 11 classes for 10 digits and empty space.</p> | 2017-08-31 03:41:29.130000+00:00 | 2017-09-04 15:57:07.510000+00:00 | 2017-09-04 15:57:07.510000+00:00 | null | 43,225,218 | <p>How can I train the model to recognize five numbers in one picture.
The code is as follows:</p>
<pre><code>from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dropout, Dense, Input
from keras.models import Model, Sequential
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=(28, 140, 1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dropout(0.5))
</code></pre>
<p>Here should be a loop for recognizing each number in the picture, but I don't know how to realize it.</p>
<pre><code>model.add(Dense(11, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(X_train, y_train,
batch_size=1000,
epochs=8,
verbose=1,
validation_data=(X_valid, y_valid))
</code></pre>
<p>The picture of combined mnist number is as follows:</p>
<p><img src="https://i.stack.imgur.com/uOFFU.png" alt="combined numbers in one picture"></p> | 2017-04-05 07:57:39.007000+00:00 | 2017-09-04 15:57:07.510000+00:00 | 2017-09-01 12:17:59.860000+00:00 | python|machine-learning|deep-learning|keras|mnist | ['http://arxiv.org/abs/1312.6082'] | 1 |
<p>No, the input just needs to be sequence-like; it does not have to be a time series.</p>
<p><a href="https://arxiv.org/abs/1503.04069" rel="nofollow noreferrer">Klaus Greff, et al., LSTM: A Search Space Odyssey, 2015</a> :
Since LSTMs are effective at capturing long-term temporal dependencies without suffering from the optimization hurdles that plague simple recurrent networks (SRNs), they have been used to advance the state of the art for many difficult problems. This includes handwriting recognition and generation, language modeling and translation, acoustic modeling of speech, speech synthesis, protein secondary structure prediction, analysis of audio, and video data among others.</p>
<p><a href="https://www.mitpressjournals.org/doi/abs/10.1162/089976600300015015" rel="nofollow noreferrer">Felix A. Gers, et al., Learning to Forget: Continual Prediction with LSTM, 2000</a> : LSTM holds promise for any sequential processing task in which we suspect that a hierarchical decomposition may exist, but do not know in advance what this decomposition is.</p> | 2021-02-11 19:00:37.313000+00:00 | 2021-02-11 19:00:37.313000+00:00 | null | null | 66,145,930 | <p>Can we use Seq2Seq model with input data that has no temporal relation ( not a time series )? For example I have a list of image regions that I would like to feed my seq2seq model. And the the model should predict an description ( output is time series |) or captions.</p>
<p>I’m not asking from the technical perspective, I know that if the data is in the correct format then I can do that. My question is rather theoretical, is it ok to use Seq2Seq with none time series data? And are there any papers/articles/references of using Seq2Seq in this setting ?</p> | 2021-02-10 22:17:19.360000+00:00 | 2021-02-11 19:00:37.313000+00:00 | 2021-02-11 12:39:24.630000+00:00 | nlp|computer-vision|lstm|seq2seq | ['https://arxiv.org/abs/1503.04069', 'https://www.mitpressjournals.org/doi/abs/10.1162/089976600300015015'] | 2 |
50,021,204 | <p>The fastest Betweenness algorithm that I'm aware of is <a href="http://algo.uni-konstanz.de/publications/b-fabc-01.pdf" rel="nofollow noreferrer">the one by Brandes</a>. It runs in O(nm), i.e., if your graph is sparse, that's significantly faster. The algorithm is reasonably complex and probably not implemented in neo4j. However, implementing it should be possible.</p>
<p>Why are you using neo4j? Are your graphs so large that they don't fit in main memory? I don't know neo4j, but that seems to be its selling point: it can do computations in external memory. If that is not the case, I would strongly recommend loading the whole graph into some simple representation in RAM and doing your computation there. That's almost always faster than working on a database backend.</p>
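<p>For a graph of this size, a minimal in-memory sketch in Python with networkx (whose <code>betweenness_centrality</code> implements Brandes' algorithm, and whose <code>k</code> argument samples pivots for an approximation) could look like this (the edge-list export from neo4j is assumed, not shown):</p>
<pre><code>import networkx as nx

G = nx.Graph()
G.add_edges_from(edge_list)                      # edges exported from the database

bc_exact = nx.betweenness_centrality(G)          # Brandes, O(nm)
bc_approx = nx.betweenness_centrality(G, k=500)  # estimate from 500 random pivots
</code></pre>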
<p>However, if your graphs are too large for main memory, you might want to consider alternative algorithms that are especially suited for external computation. You perhaps also want to consider algorithms that don't compute Betweenness exactly, but only approximatively. </p>
<p>Let me quote from the work of <a href="https://arxiv.org/pdf/1510.07971.pdf" rel="nofollow noreferrer">Bergamini and Meyerhenke</a>, which gives a very good literature overview (the references in […] can be found in the paper):</p>
<blockquote>
<p>The fastest existing
method for the exact BC computation, BA, requires Θ(nm) operations for
unweighted graphs and Θ(nm+n 2 log n) for graphs with positive edge
weights [7]. BA computes [… description of the algorithm left out …] Based on this concept, some algorithms for an approximation of
BC have been developed. Brandes and Pich [8] propose to approximate
cB(v) by extrapolating it from the contributions of a subset of source
nodes, also called pivots. Selecting the pivots uniformly at random,
the approximation can be proven to be an unbiased estimator for cB(v)
(i.e. its expectation is equal to cB(v)). In a subsequent work,
Geisberger et al. [14] notice that this can overestimate BC scores of
nodes close to the pivots. To limit this bias, they introduce a
scaling function which gives less importance to contributions from
pivots that are close to the node. Bader et al. [1] approximate the BC
of a specific node only, based on an adaptive sampling technique that
reduces the number of pivots for nodes with high centrality.
Chehreghani [9] proposes alternative sampling techniques that try to
minimize the number of samples. Different from the previous approaches
is the approximation algorithm by Riondato and Kornaropoulos [25],
which samples a single random shortest path at each iteration. This
approach allows a theoretical guarantee on the quality of
approximation. </p>
</blockquote> | 2018-04-25 11:31:24.323000+00:00 | 2018-04-25 11:31:24.323000+00:00 | null | null | 49,973,713 | <p>Is anyone aware of a faster method to find betweenness of all nodes in a graph database in neo4j?</p>
<p>Currently, I am using an O(n^2) solution where I find the shortest path between each possible pair of nodes.</p>
<p>Any leads or implementations will be much appreciated. Especially if it's in python.</p> | 2018-04-23 05:04:06.030000+00:00 | 2018-04-25 11:31:24.323000+00:00 | null | python|neo4j|graph-algorithm | ['http://algo.uni-konstanz.de/publications/b-fabc-01.pdf', 'https://arxiv.org/pdf/1510.07971.pdf'] | 2 |
40,567,467 | <p>From <a href="https://stats.stackexchange.com/q/232393/12359">Is there any method for choosing the number of layers and neurons?</a>:</p>
<p>There is no direct way to find the optimal number of them: people empirically try and see (e.g., using cross-validation). The most common search techniques are random, manual, and grid searches. </p>
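<p>As a small illustration, a grid search over depth and width with scikit-learn's <code>MLPClassifier</code> (the candidate grid and data names are arbitrary placeholders):</p>
<pre><code>from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'hidden_layer_sizes': [(8,), (32,), (32, 16), (64, 32, 16)]}
search = GridSearchCV(MLPClassifier(max_iter=1000), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
</code></pre>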
<p>There exist more advanced techniques such as</p>
<p>1) Gaussian processes. Example:</p>
<ul>
<li>Franck Dernoncourt, Ji Young Lee <a href="http://arxiv.org/abs/1609.08703" rel="nofollow noreferrer">Optimizing Neural Network Hyperparameters with Gaussian Processes for Dialog Act Classification</a>, IEEE SLT 2016.</li>
</ul>
<p>2) <a href="https://en.wikipedia.org/wiki/Neuroevolution" rel="nofollow noreferrer">Neuro-evolution</a>. Examples:</p>
<ul>
<li>Zaremba, Wojciech. Ilya Sutskever. Rafal Jozefowicz "<a href="https://scholar.google.com/scholar?cluster=9021528542952156548&hl=en&as_sdt=0,22" rel="nofollow noreferrer">An empirical exploration of recurrent network architectures.</a>" (2015): used evolutionary computation to find optimal RNN structures.</li>
<li>Franck Dernoncourt. "<a href="http://www.francky.me/doc/mrf2011-HEC-ISIR-ENS_en.pdf" rel="nofollow noreferrer">The medial Reticular Formation: a neural substrate for action selection? An evaluation via evolutionary computation.</a>". Master's Thesis. École Normale
Supérieure Ulm. 2011. Used evolutionary computation to find connections in the ANN.</li>
<li>Bayer, Justin, Daan Wierstra, Julian Togelius, and Jürgen Schmidhuber. "<a href="https://scholar.google.com/scholar?cluster=14945304742464379854&hl=en&as_sdt=0,22" rel="nofollow noreferrer">Evolving memory cell structures for sequence learning.</a>" In International Conference on Artificial Neural Networks, pp. 755-764. Springer Berlin Heidelberg, 2009.: used evolutionary computation to find optimal RNN structures.</li>
</ul>
<p>Also relevant: <a href="https://stats.stackexchange.com/q/244975/12359">To design a Multilayer Perceptron, should I use more units per layer and less layers or more layers and less units, which is better?</a></p> | 2016-11-12 20:44:57.030000+00:00 | 2016-11-12 20:44:57.030000+00:00 | 2017-04-13 12:44:13.837000+00:00 | null | 40,563,017 | <p>I am working on Deep Neural Networks and was wondering about the following question:</p>
<p>What is the best number of layers and number of neurons per layer in general for <strong><em>optimum accuracy</em></strong>?</p>
<p>As per this picture:
<a href="https://i.stack.imgur.com/4SV9A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4SV9A.png" alt="Image1"></a></p>
<p>Would the optimum numbers be equal to the feature size, so that each feature's influence on each other set of features is taken into account? </p>
<p>Also, would the answer differ if we were looking for <strong><em>optimum accuracy and efficiency</em></strong>?</p>
<p>Thank you, any insights are appreciated!</p>
<p>Edit:</p>
<p>These answers are informative. I still feel like they don't address specifically the first part of my question. To clarify: Is there a maximum amount of neurons and layers that when applied would be equally granular to the data, and thus adding more neurons or layers would be redundant? I assume infinite layers to a 3 feature data set would at some point become unnecessary. Thanks again for all reads and replies!</p> | 2016-11-12 12:52:07.400000+00:00 | 2016-11-13 17:23:43.133000+00:00 | 2016-11-13 17:23:43.133000+00:00 | optimization|machine-learning|neural-network|artificial-intelligence|deep-learning | ['https://stats.stackexchange.com/q/232393/12359', 'http://arxiv.org/abs/1609.08703', 'https://en.wikipedia.org/wiki/Neuroevolution', 'https://scholar.google.com/scholar?cluster=9021528542952156548&hl=en&as_sdt=0,22', 'http://www.francky.me/doc/mrf2011-HEC-ISIR-ENS_en.pdf', 'https://scholar.google.com/scholar?cluster=14945304742464379854&hl=en&as_sdt=0,22', 'https://stats.stackexchange.com/q/244975/12359'] | 7 |
40,563,104 | <p>There is no general answer to your question. Such quantities are called hyper-parameters and their choosing is an open problem, and a big part of the art of machine learning. <a href="https://www.quora.com/What-are-hyperparameters-in-machine-learning" rel="nofollow noreferrer">Here</a> is a discussion on the topic on Quora.</p>
<p>For a good introduction into neural networks and their inner-workings, see <a href="http://neuralnetworksanddeeplearning.com/chap3.html" rel="nofollow noreferrer">improving the way neural networks learn</a>.</p>
<p>To gain intuition on choosing such hyper-parameters, and constructing networks architecture, one would be wise to study known successful models:</p>
<p><a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf" rel="nofollow noreferrer">LeNet</a> : The first successful applications of Convolutional Networks were developed by Yann LeCun in 1990’s. Of these, the best known is the LeNet architecture that was used to read zip codes, digits, etc.</p>
<p><a href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks" rel="nofollow noreferrer">AlexNet</a> : The first work that popularized Convolutional Networks in Computer Vision</p>
<p><a href="http://arxiv.org/abs/1409.4842" rel="nofollow noreferrer">GoogleNet</a> : The ILSVRC 2014 winner</p>
<p>Study how they are designed for the particulars of the problem being solved.</p> | 2016-11-12 13:03:56.320000+00:00 | 2016-11-12 13:12:30.103000+00:00 | 2016-11-12 13:12:30.103000+00:00 | null | 40,563,017 | <p>I am working on Deep Neural Networks and was wondering about the following question:</p>
<p>What is the best number of layers and number of neurons per layer in general for <strong><em>optimum accuracy</em></strong>?</p>
<p>As per this picture:
<a href="https://i.stack.imgur.com/4SV9A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4SV9A.png" alt="Image1"></a></p>
<p>Would the optimum numbers be equal to the feature size, so that each feature's influence on each other set of features is taken into account? </p>
<p>Also, would the answer differ if we were looking for <strong><em>optimum accuracy and efficiency</em></strong>?</p>
<p>Thank you, any insights are appreciated!</p>
<p>Edit:</p>
<p>These answers are informative. I still feel like they don't address specifically the first part of my question. To clarify: Is there a maximum amount of neurons and layers that when applied would be equally granular to the data, and thus adding more neurons or layers would be redundant? I assume infinite layers to a 3 feature data set would at some point become unnecessary. Thanks again for all reads and replies!</p> | 2016-11-12 12:52:07.400000+00:00 | 2016-11-13 17:23:43.133000+00:00 | 2016-11-13 17:23:43.133000+00:00 | optimization|machine-learning|neural-network|artificial-intelligence|deep-learning | ['https://www.quora.com/What-are-hyperparameters-in-machine-learning', 'http://neuralnetworksanddeeplearning.com/chap3.html', 'http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf', 'http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks', 'http://arxiv.org/abs/1409.4842'] | 5 |
<p>Based on my research, and using this paper as a reference: <a href="https://arxiv.org/abs/1607.02533" rel="nofollow noreferrer">https://arxiv.org/abs/1607.02533</a>.
In real life, once you convert them to images, most adversarial samples generated by such attacks no longer work in the real world. The paper explains it as follows: "This could be explained by the fact that iterative methods exploit more subtle kind of perturbations, and these subtle perturbations are more likely to be destroyed by photo transformation".</p>
<p>As an example: your clean image has pixel values 127, 200, 55, ...; you divide by 255 (as it is an 8-bit PNG) and send (0.4980, 0.7843, 0.2156, ...) to your model. DeepFool is an advanced attack method: it adds a tiny perturbation and changes this to (0.498<strong>1</strong>, 0.784<strong>1</strong>, 0.215<strong>5</strong>, ...). This is now an adversarial sample that can fool your model, but if you save it to an 8-bit PNG you get 127, 200, 55, ... again after multiplying by 255. So the adversarial information is lost.</p>
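<p>You can verify this numerically: a perturbation smaller than half a quantization step (1/255, about 0.004) does not survive the round trip through uint8. A standalone illustration, not tied to the code in the question:</p>
<pre><code>import numpy as np

clean = np.array([127, 200, 55], dtype=np.uint8)
x = clean / 255.0                                # [0.498, 0.784, 0.216]
x_adv = x + np.array([1e-4, -1e-4, 1e-4])        # tiny adversarial perturbation

saved = np.round(x_adv * 255).astype(np.uint8)
print(saved)                                     # [127 200  55], identical to clean
</code></pre>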
<p>Simply put, the DeepFool method adds a perturbation so small that it essentially cannot be represented in a real-world 8-bit PNG.</p>
<p>I have test it using sparsefool and deepfool, and I think there are some precision problems when I save it into images. But I cannot figure it out how to implement it correctly. </p>
<pre><code>if __name__ == "__main__":
# pic_path = 'testSample/img_13.jpg'
pic_path = "./hacked.jpg"
model_file = './trained/'
image = Image.open(pic_path)
image_array = np.array(image)
# print(np.shape(image_array)) # 28*28
shape = (28, 28, 1)
projection = (0, 1)
image_norm = tf.cast(image_array / 255.0 - 0.5, tf.float32)
image_norm = np.reshape(image_norm, shape) # 28*28*1
image_norm = image_norm[tf.newaxis, ...] # 1*28*28*1
model = tf.saved_model.load(model_file)
print(np.argmax(model(image_norm)), "nnn")
# fool_img, r, pred_label, fool_label, loops = SparseFool(
# image_norm, projection, model)
print("pred_label", pred_label)
print("fool_label", np.argmax(model(fool_img)))
pert_image = np.reshape(fool_img, (28, 28))
# print(pert_image)
pert_image = np.copy(pert_image)
# np.savetxt("pert_image.txt", (pert_image + 0.5) * 255)
pert_image += 0.5
pert_image *= 255.
# shape = (28, 28, 1)
# projection = (0, 1)
# pert_image = tf.cast(((pert_image - 0.5) / 255.), tf.float32)
# image_norm = np.reshape(pert_image, shape) # 28*28*1
# image_norm = image_norm[tf.newaxis, ...] # 1*28*28*1
# print(np.argmax(model(image_norm)), "ffffnnn")
png = Image.fromarray(pert_image.astype(np.uint8))
png.save("./hacked.jpg")
</code></pre>
<p>It should attack 4 to 9, however, the saved image is still predicted into 4.</p>
<p>The full code project is shared on
<a href="https://drive.google.com/open?id=132_SosfQAET3c4FQ2I1RS3wXsT_4W5Mw" rel="nofollow noreferrer">https://drive.google.com/open?id=132_SosfQAET3c4FQ2I1RS3wXsT_4W5Mw</a></p> | 2019-07-17 02:27:01.083000+00:00 | 2020-01-30 05:07:40.257000+00:00 | null | image|generative-adversarial-network | ['https://arxiv.org/abs/1607.02533'] | 1 |
<p>UNet is absent from the <a href="https://www.cityscapes-dataset.com/benchmarks/#scene-labeling-task" rel="nofollow noreferrer">benchmark</a>, so I assume it is not well suited to this dataset (probably too slow and not performant enough). However, I advise you to start with <a href="https://arxiv.org/abs/1802.02611" rel="nofollow noreferrer">DeepLabv3+</a> from Google, which is not too complicated and is better suited to this dataset.</p>
<p>You can use this <a href="https://github.com/bonlime/keras-deeplab-v3-plus" rel="nofollow noreferrer">repository</a>, where it is implemented, well documented, and usable with pretrained weights from the Cityscapes dataset (and also the <a href="http://host.robots.ox.ac.uk/pascal/VOC/" rel="nofollow noreferrer">PascalVOC</a> dataset).</p>
<p>Please guide me on this!!!</p> | 2020-12-11 14:20:36.453000+00:00 | 2020-12-11 15:17:23.090000+00:00 | null | tensorflow|keras|image-segmentation|pre-trained-model|semantic-segmentation | ['https://www.cityscapes-dataset.com/benchmarks/#scene-labeling-task', 'https://arxiv.org/abs/1802.02611', 'https://github.com/bonlime/keras-deeplab-v3-plus', 'http://host.robots.ox.ac.uk/pascal/VOC/'] | 4 |
<p>A couple of hours seems like a lot of time. Are you sure you are running on an optimized machine? Perhaps you could experiment on Linux and AWS EC2. Also check out <code>ranger</code>, which came out a couple of weeks ago: <a href="http://arxiv.org/abs/1508.04409" rel="nofollow noreferrer">http://arxiv.org/abs/1508.04409</a> and
<a href="https://cran.r-project.org/web/packages/ranger/index.html" rel="nofollow noreferrer">https://cran.r-project.org/web/packages/ranger/index.html</a></p>
<p>Also check <a href="https://stackoverflow.com/questions/14106010/parallel-execution-of-random-forest-in-r">parallel execution of random forest in R</a></p> | 2015-09-19 16:01:27.433000+00:00 | 2015-09-19 16:01:27.433000+00:00 | 2017-05-23 10:29:36.527000+00:00 | null | 32,669,927 | <p>I'm working with a very large set of data, about 120,000 rows and 34 columns. As you can well image, when using the R package randomForest, the program takes quite a number of hours to run, even on a powerful Windows server. </p>
<p>Although I am no expert in randomForest, I have a question about the proper use of the combine() function. </p>
<p>I seem to get conflicting answers when I researched this question online. Some say that you can only use combine() when using randomForest on the same set of data. Others say that you can just use combine().</p>
<p>What I'd like (hope, dream) to do is break up the 120,000 rows of data into 6 data frames, each containing 20,000 rows and perform randomForest on each of the 6 data frames. My hope is that I can use the combine() function to then combine the results of all 6 together. Is that possible? </p>
<p>Any help in this matter would be greatly appreciated. </p> | 2015-09-19 15:24:12.337000+00:00 | 2015-09-19 16:01:27.433000+00:00 | null | r|random-forest | ['http://arxiv.org/abs/1508.04409', 'https://cran.r-project.org/web/packages/ranger/index.html', 'https://stackoverflow.com/questions/14106010/parallel-execution-of-random-forest-in-r'] | 3 |
62,717,143 | <p>Although BERT wasn't specifically trained to find similarity between JSON data, you could always extract and concatenate the values of your JSON into a long sentence and leave it to BERT to capture the context as you expect.</p>
<p>Alternatively, you could generate a cosine similarity score for each key-value dependency between the JSONs and aggregate them to generate a net similarity score for the JSON data pair.</p>
<p>Also, see <a href="https://arxiv.org/abs/1908.10084" rel="nofollow noreferrer">Sentence-BERT (SBERT)</a>, a modification of the pre-trained BERT network that use siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine-similarity.</p> | 2020-07-03 14:08:36.070000+00:00 | 2020-07-03 14:08:36.070000+00:00 | null | null | 62,677,633 | <p>I am trying to create one knowledge base (single source of truth) gathered from multiple web sources. (ex. wiki <-> fandom)</p>
<p>So I want to try a Siamese network or calculate cosine similarity with BERT embedded documents.</p>
<p>Then, can I ignore those json structures and train them anyway?</p> | 2020-07-01 13:11:14.353000+00:00 | 2020-07-03 14:08:36.070000+00:00 | 2020-07-03 04:51:20.890000+00:00 | machine-learning|deep-learning|cluster-analysis | ['https://arxiv.org/abs/1908.10084'] | 1 |
70,594,751 | <p>A close relative of this problem is studied in the literature under the name "<a href="https://en.wikipedia.org/wiki/Dynamic_connectivity#Decremental_connectivity" rel="nofollow noreferrer">decremental connectivity</a>". The main difference is that you want to enumerate the new connected components instead of being able to answer connectivity queries.</p>
<p><a href="https://arxiv.org/abs/2111.09376" rel="nofollow noreferrer">This paper</a> seems to be the theoretical state of the art, but having skimmed it, I don't think that the new ideas will be useful to you, since you have a graph that's</p>
<ul>
<li>Sparse</li>
<li>Likely too small for the asymptotic advantages to kick in</li>
<li>Right at that size where with a good representation it fits into L1 cache, but large enough that you can't also fit fancy data structures.</li>
</ul>
<p>My recommendation would be a simplified version of Even--Shiloach where the BFS part is omitted (it adds complexity and memory consumption but doesn't seem obviously winning in a sparse graph). The idea is to maintain</p>
<ul>
<li>the residual graph</li>
<li>a spanning forest of the residual graph</li>
<li>a map from vertices to labels representing the connected components.</li>
</ul>
<p>If an edge not in the forest gets deleted, then we just delete it. Otherwise, we pick one side of the tree that just got cut (doesn't matter for correctness which side, but you want the smaller one for efficiency; I recommend rooting the spanning trees and using the child), traverse it to provisionally update its label to something new, and then scan all of its edges to determine whether the edge that was just deleted is a bridge to the rest of the tree. If it is, great; traverse the other new tree for reporting the vertex sets. Otherwise, we have to fix up the labels and the tree structure.</p>
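<p>For orientation, here is a minimal per-deletion bridge test in Python: plain BFS over the residual adjacency after each removal, i.e. the naive O(V+E)-per-deletion baseline rather than the spanning-forest scheme sketched above (graph given as a dict of vertex to set of neighbours):</p>
<pre><code>from collections import deque

def delete_edge(adj, u, v):
    # remove the undirected edge and report (is_bridge, side_u, side_v)
    adj[u].discard(v)
    adj[v].discard(u)

    side_u, queue = {u}, deque([u])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in side_u:
                side_u.add(y)
                queue.append(y)

    if v in side_u:                    # still connected: not a bridge
        return False, None, None

    side_v, queue = {v}, deque([v])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in side_v:
                side_v.add(y)
                queue.append(y)
    return True, side_u, side_v
</code></pre>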
<p>For the data structures I recommend a compact adjacency list. Since there are at most 10,000 nodes, a node index will fit in a 16-bit <code>short</code>. There are up to 100,000 half-edges, unfortunately, so we'll use a 32-bit <code>int</code> to represent an edge index. We use an <code>int</code> array to point into a <code>short</code> array; entry <code>2*v</code> is the beginning of the adjacency list for node <code>v</code>, and entry <code>2*v+1</code> is the end. For cache locality we store two extra pieces of data alongside the adjacency list: the connected component label, and the number of descendants in the spanning forest. Order the spanning forest descendants before the other neighbors.</p>
<p>Overall here is what this looks like on a simple graph:</p>
<pre><code>Graph:
0
/ \
/ \
1-----3 2
Spanning forest:
0
/
/
1-----3 2
Arrays:
[| | | | | | | | ]
| | | \ | | | |
| | | \ \ | | \
| \ \ \ | | | |
v v v v v v v v
[0 1 1 3 x 0 1 3 0 2 0 x 0 0 0 1]
^ ^ \ /
| | |
| | adjacency list; descendants (3) before others (0)
| |
| number of spanning forest descendants
|
connected component label
= root of tree in spanning forest
</code></pre>
<p>I left <code>x</code>s to represent a previously deleted edge from <code>0</code> to <code>2</code>.</p>
<p>(I don't have a lot of experience with C#, so it's possible that the mandatory bounds checking will be a problem, but I can't imagine that pointers would be better, especially on a 64-bit machine.)</p> | 2022-01-05 14:39:48.593000+00:00 | 2022-01-05 14:39:48.593000+00:00 | null | null | 70,586,198 | <p>I am looking for a time-effective algorithm for this particular problem:</p>
<p>I have undirected graph with up to 10,000 vertexes and about 1-10 edges out of given vertex.</p>
<p>Now I will remove chosen edge from graph and I want to know if the edge I just removed was a bridge - and if so, what are the vertices connected on both sides. I will repeat the step of removing edge frequently, possibly till I get 10 000 disconnected vertices (each time I need the information of bridge). So, for example last edge should inform that it was a bridge with a single vertex on one side and a single vertex on the other.</p>
<p>Preprocessing of data is fully acceptable, the memory costs - within reasonable limits of modern PC - are okay. I am looking for an algorithm that optimize time of edge removal operation.</p>
<p>My tool of trade is C#, but any pseudo-code or idea I would gladly accept</p> | 2022-01-04 23:26:45.717000+00:00 | 2022-01-05 14:39:48.593000+00:00 | null | algorithm|optimization|graph|graph-algorithm | ['https://en.wikipedia.org/wiki/Dynamic_connectivity#Decremental_connectivity', 'https://arxiv.org/abs/2111.09376'] | 2 |
54,270,962 | <p>I am not familiar with the <a href="https://github.com/VeReMi-dataset" rel="nofollow noreferrer">VeReMi project</a>, so I do not know what value it is referring to as "the RSSI" when a frame is received. The accompanying <a href="https://arxiv.org/abs/1804.06701" rel="nofollow noreferrer">ArXiV paper</a> paper mentions no more details than that "the RSSI of the receiver" is logged on frame receptions.</p>
<p>Cursory inspection of the <a href="https://github.com/VeReMi-dataset/veins/blob/24e49cb4a140419d9cc88f0bbe54990c4ac63007/src/veins/modules/phy/Decider80211p.cc#L476" rel="nofollow noreferrer">code for logging the dataset you mentioned</a> shows that, on every reception of a frame, a method is called that <a href="https://github.com/VeReMi-dataset/veins/blob/24e49cb4a140419d9cc88f0bbe54990c4ac63007/src/veins/base/phyLayer/BaseDecider.cc#L271" rel="nofollow noreferrer">sums up the power levels of all transmissions currently present at the receiver</a>.</p>
<p>From this, it appears quite straightforward that (a) how far a frame traveled when it arrives at the receiver has only little relation to (b) the total amount of power experienced by the receiver at this time.</p>
<p>If you are interested in the Received Signal Strength (RSS) of every frame received, there is a much simpler path you can follow: Taking Veins version 5 alpha 1 as an example, your application layer can access the ControlInfo of a frame and, from there, its RSS, e.g., as follows:
<code>check_and_cast<DeciderResult80211*>(check_and_cast<PhyToMacControlInfo*>(wsm->getControlInfo())->getDeciderResult())->getRecvPower_dBm()</code>. The same approach should work for Veins 4.6 (on which, I believe, the VeReMi dataset you are referring to is based) as well.</p>
<p>In simulations that only use <code>SimplePathlossModel</code>, Veins' version of a free space path loss model, this will result in the familiar curve:</p>
<p><a href="https://i.stack.imgur.com/D15zF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D15zF.png" alt="enter image description here"></a></p> | 2019-01-19 20:10:08.063000+00:00 | 2019-01-19 20:10:08.063000+00:00 | null | null | 54,238,799 | <p>We are working on an application based on Veins framework which needs RSSI value of received signal and the distance between sender and receiver. </p>
<p>We referred to the VeReMi project which also calculates RSSI value and sends it to upper level. </p>
<p>We compared our simulation result (RSSI vs Distance) with the VeReMi dataset and they look quite different. Can you help us to explain how RSSI is calculated and whether our result is normal?</p>
<p>In our application, we obtain the distance and rssi value by</p>
<pre><code>auto distance = sender.getPosition().distance(receiverPos);
auto senderRSSI = sender.getRssi();
</code></pre>
<p>In the lower level, the rssi is set in the Decider80211p::processSignalEnd(AirFrame* msg) method as in the VeReMi project.</p>
<pre><code>if (result->isSignalCorrect()) {
DBG_D11P << "packet was received correctly, it is now handed to upper layer...\n";
// go on with processing this AirFrame, send it to the Mac-Layer
WaveShortMessage* decap = dynamic_cast<WaveShortMessage*>(static_cast<Mac80211Pkt*>(frame->decapsulate())->decapsulate());
simtime_t start = frame->getSignal().getReceptionStart();
simtime_t end = frame->getSignal().getReceptionEnd();
double rssiValue = calcChannelSenseRSSI(start, end);
decap->setRSSI(rssiValue);
phy->sendUp(frame, result);
}
</code></pre>
<p>Regarding the simulation configuration, our config.xml differs from VeReMi and there is no the following lines in our case.</p>
<pre><code><AnalogueModel type="VehicleObstacleShadowing">
<parameter name="carrierFrequency" type="double" value="5.890e+9"/>
</AnalogueModel>.
</code></pre>
<p>The 11p specific parameters and NIP settings in the omnetpp.ini are the same.</p>
<p>Also, our simulation is based on Boston map. </p>
<p>The scatter plot of our simulation result of RSSI_vs_Distance is shown in the following figure.</p>
<p><a href="https://i.stack.imgur.com/3uoLD.jpg" rel="nofollow noreferrer">RSSI vs Distance from our simulation shows that even at distance beyond 1000 meters we still have received signal with strong RSSI values</a></p>
<p>In comparison, we extract data from VeReMi dataset and plot the RSSI vs Distance which is shown in following pic.</p>
<p><a href="https://i.stack.imgur.com/XPqnb.jpg" rel="nofollow noreferrer">VeReMi dataset RSSI vs Distance is what we were expecting where RSSI decreases as distance increases</a></p>
<p>Can you help us explain whether our result is normal and what may cause the issue we have now? Thanks!</p> | 2019-01-17 15:08:52.593000+00:00 | 2019-01-19 20:10:08.063000+00:00 | 2019-01-18 19:09:18.537000+00:00 | veins|rssi|sumo | ['https://github.com/VeReMi-dataset', 'https://arxiv.org/abs/1804.06701', 'https://github.com/VeReMi-dataset/veins/blob/24e49cb4a140419d9cc88f0bbe54990c4ac63007/src/veins/modules/phy/Decider80211p.cc#L476', 'https://github.com/VeReMi-dataset/veins/blob/24e49cb4a140419d9cc88f0bbe54990c4ac63007/src/veins/base/phyLayer/BaseDecider.cc#L271', 'https://i.stack.imgur.com/D15zF.png'] | 5 |
<p>That's an interesting question. If I understand you correctly, your goal is to handle aleatoric (data-inherent) uncertainty in a classification setting.</p>
<p>One option, as above, could be to apply Monte-Carlo dropout (use dropout at training time and leave it turned on at inference to estimate the variance). However, it has been shown that this only models aleatoric uncertainty partially (<a href="https://arxiv.org/abs/1703.04977" rel="nofollow noreferrer">https://arxiv.org/abs/1703.04977</a>), and the quality may vary with the expressiveness of your model. If you go further down this road you may also check out this work (<a href="https://arxiv.org/abs/1908.00598" rel="nofollow noreferrer">https://arxiv.org/abs/1908.00598</a>), where the authors introduce error propagation through neural nets to eliminate sampling at inference time. Maybe the error propagation is of interest for your specific case.</p>
<p>More importantly, however, some works use the entropy of the resulting softmax as an uncertainty estimate. This has been shown to fail for epistemic (model) uncertainty. However, without having a corresponding work on this at hand, I think it will perform decently for the aleatoric uncertainty, which you are trying to model.</p>
<p>What do you need to do? Train your model on your noisy dataset; afterwards, the entropy of your softmax should correlate with the aleatoric uncertainty. You can check this by plotting it against the classification error.</p>
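<p>Concretely, reusing names from the MWE in the question, the per-sample predictive entropy of the softmax can be computed like this (a sketch; entropy in nats):</p>
<pre><code>import numpy as np

probs = model.predict(x_test)                          # (N, num_classes) softmax
entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)

errors = np.argmax(probs, axis=1) != np.argmax(y_test, axis=1)
plt.scatter(entropy, errors.astype(int), alpha=0.3)    # entropy vs. misclassification
plt.xlabel('predictive entropy'); plt.ylabel('misclassified')
plt.show()
</code></pre>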
<p>Best</p> | 2019-08-22 15:44:08.603000+00:00 | 2019-08-22 15:44:08.603000+00:00 | null | null | 56,758,031 | <p>I have a classification neural network and nominal input data on which it is trained, however the input data has for each feature a systematic (up and down) uncertainty. How should the accuracy of the classifier be qualified and visualised using these input data uncertainties? I have a simple MWE example composed using the iris dataset; the intention is that is should be copy-pastable easily into a Jupyter notebook.</p>
<p>Lotsa imports:</p>
<pre><code>import numpy as np
import datetime
from IPython.display import SVG
from keras.datasets import mnist
from keras import activations
from keras import backend as K
from keras.layers import Dense, Input, concatenate, Conv1D, Conv2D, Dropout, MaxPooling1D, MaxPooling2D
from keras.layers import Dense, Flatten
from keras.models import Model, Sequential, load_model
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
from matplotlib import gridspec
from matplotlib.ticker import NullFormatter, NullLocator, MultipleLocator
from scipy import stats
from sklearn.datasets import load_iris
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import train_test_split
from vis.utils import utils
from vis.visualization import visualize_activation
from vis.visualization import visualize_saliency
import datetime
import keras
import matplotlib.pylab as plt
import pandas as pd
import random
import seaborn as sns
import talos as ta
sns.set_palette('husl')
sns.set(style='ticks')
import warnings
warnings.filterwarnings('ignore')
</code></pre>
<pre><code>%matplotlib inline
plt.rcParams['figure.figsize'] = [10, 10]
</code></pre>
<p>Let's load the iris dataset and limit it to two classes, then prepare it for training.</p>
<pre><code>iris = load_iris()
df = pd.DataFrame(
data = np.c_[iris['data'], iris['target']],
columns = iris['feature_names'] + ['target']
)
df = df.query('target != 2')
df.head()
df['labels'] = df['target'].astype('category').cat.codes
x = df[['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']]
y = df['target']
# Convert class vectors to binary class matrices using 1 hot encoding.
# 0 ---> 1, 0, 0
# 1 ---> 0, 1, 0
# 2 ---> 0, 0, 1
num_classes = len(y.unique())
y = keras.utils.to_categorical(y, len(y.unique()))
x = np.asarray(x)
y = np.asarray(y)
x = x.reshape(len(x), 4, 1)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.33, shuffle = True)
</code></pre>
<p>Let's make some simple model for classification.</p>
<pre><code>model = Sequential()
model.add(Dense(5, input_shape = (4, 1), activation = 'tanh'))
model.add(Dropout(rate=0.7))
model.add(Flatten())
model.add(Dense(5, activation = 'tanh'))
model.add(Dense(num_classes, activation = 'softmax', name = 'preds'))
model.compile(loss = "categorical_crossentropy", optimizer = "nadam", metrics = ['accuracy'])
model.summary()
SVG(model_to_dot(model).create(prog='dot', format='svg'))
</code></pre>
<p>Now for a quick bit of training...</p>
<pre><code>%%time
def model_evaluation(model, x_test, y_test, verbose=False):
score = model.evaluate(x_test, y_test, verbose=verbose)
print('max. test accuracy observed:', max(model.history.history['val_acc']))
print('max. test accuracy history index:', model.history.history['val_acc'].index(max(model.history.history['val_acc'])))
plt.plot(model.history.history['acc'])
plt.plot(model.history.history['val_acc'])
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train_accuracy', 'test_accuracy'], loc='best')
plt.show()
model.fit(
x_train,
y_train,
batch_size = 2,
epochs = 100,
verbose = False,
validation_data = (x_test, y_test),
)
model_evaluation(model, x_test, y_test, verbose=False)
</code></pre>
<p>Now, let's add some uncertainties for each of the features:</p>
<pre><code>for column in ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']:
uncertainties_up = 0.1 * df[column].mean() * np.random.random_sample(size=(len(df)))
uncertainties_down = df[column].mean() * np.random.random_sample(size=(len(df)))
df[column + " uncertainty up"] = df[column] + uncertainties_up
df.head()
</code></pre>
<p><em>And now</em> what actually comes next, in order to qualify the classifier given these various input data uncertainties?</p> | 2019-06-25 16:02:55.350000+00:00 | 2019-08-22 15:44:08.603000+00:00 | null | machine-learning|keras|classification|uncertainty | ['https://arxiv.org/abs/1703.04977', 'https://arxiv.org/abs/1908.00598'] | 2 |
<p>For posterity: after a lot of trial and error, the following full code for the autoencoder seems to work very well. Getting the packing and unpacking to work correctly was the main hurdle. The key, I think, is to get the most out of the LSTM modules by using the <code>proj_size</code>, <code>num_layers</code>, and <code>dropout</code> parameters.</p>
<pre class="lang-py prettyprint-override"><code>class EncoderV4(nn.Module):
def __init__(
self, seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
):
super().__init__()
self.seq_len = seq_len
self.n_features = n_features
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.num_layers = num_layers
self.lstm1 = nn.LSTM(
input_size=n_features,
hidden_size=self.hidden_dim,
num_layers=num_layers,
batch_first=True,
dropout=dropout,
proj_size=self.embedding_dim,
)
def forward(self, x):
_, (h_n, _) = self.lstm1(x)
return h_n[-1].unsqueeze(1)
class DecoderV4(nn.Module):
def __init__(self, seq_len, input_dim, hidden_dim, n_features, num_layers):
super().__init__()
self.seq_len = seq_len
self.input_dim = input_dim
self.hidden_dim = hidden_dim
self.n_features = n_features
self.num_layers = num_layers
self.lstm1 = nn.LSTM(
input_size=input_dim,
hidden_size=hidden_dim,
num_layers=num_layers,
proj_size=n_features,
batch_first=True,
)
def forward(self, x, lens):
x = x.repeat(1, self.seq_len, 1)
x = pack_padded_sequence(x, lens, batch_first=True, enforce_sorted=False)
x, _ = self.lstm1(x)
return x
class RecurrentAutoencoderV4(nn.Module):
def __init__(
self, seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
):
super().__init__()
self.encoder = EncoderV4(
seq_len, n_features, embedding_dim, hidden_dim, dropout, num_layers
)
self.decoder = DecoderV4(
seq_len, embedding_dim, hidden_dim, n_features, num_layers
)
def forward(self, x, lens):
x = self.encoder(x)
x = self.decoder(x, lens)
return x
</code></pre>
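<p>A usage sketch (shapes and hyper-parameters are illustrative; <code>proj_size</code> needs a reasonably recent PyTorch):</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

model = RecurrentAutoencoderV4(seq_len=50, n_features=3, embedding_dim=16,
                               hidden_dim=64, dropout=0.2, num_layers=2)

x = torch.zeros(4, 50, 3)                      # zero-padded batch of 4 series
lens = torch.tensor([50, 37, 21, 12])
packed = pack_padded_sequence(x, lens, batch_first=True, enforce_sorted=False)

out = model(packed, lens)                      # PackedSequence from the decoder
out_padded, _ = pad_packed_sequence(out, batch_first=True, total_length=50)
print(out_padded.shape)                        # torch.Size([4, 50, 3])
</code></pre>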
<p>The full code and a paper using this Autoencoder can be found at <a href="https://github.com/Krankile/ensemble_forecasting/blob/main/models/lstm_ae.py" rel="nofollow noreferrer">GitHub</a> and <a href="https://arxiv.org/abs/2201.00426" rel="nofollow noreferrer">arXiv</a>, respectively.</p> | 2022-01-14 13:51:26.813000+00:00 | 2022-01-14 13:51:26.813000+00:00 | null | null | 69,864,893 | <p>I'm creating an LSTM Autoencoder for feature extraction for my master's thesis. However, I'm having a lot of trouble with combining dropout with LSTM layers.</p>
<p>Since it's an Autoencoder, I'm having a bottleneck which is achieved by having two separate LSTM layers, each with num_layers=1, and a dropout in between. I have time series with very different lengths and have found packed sequences to be a good idea for that reason.</p>
<p>But, from my experiments, I must pack the data before the first LSTM, unpack before the dropout, then pack again before the second LSTM. This seems wildly inefficient. Is there a better way? I'm providing some example code and an alternative way to implement it below.</p>
<p>Current, working, but possibly suboptimal solution:</p>
<pre class="lang-py prettyprint-override"><code>class Encoder(nn.Module):
def __init__(self, seq_len, n_features, embedding_dim, hidden_dim, dropout):
super(Encoder, self).__init__()
self.seq_len = seq_len
self.n_features = n_features
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.lstm1 = nn.LSTM(
input_size=n_features,
hidden_size=self.hidden_dim,
num_layers=1,
batch_first=True,
)
self.lstm2 = nn.LSTM(
input_size=self.hidden_dim,
hidden_size=embedding_dim,
num_layers=1,
batch_first=True,
)
self.drop1 = nn.Dropout(p=dropout, inplace=False)
def forward(self, x):
x, (_, _) = self.lstm1(x)
x, lens = pad_packed_sequence(x, batch_first=True, total_length=self.seq_len)
x = self.drop1(x)
x = pack_padded_sequence(x, lens, batch_first=True, enforce_sorted=False)
x, (hidden_n, _) = self.lstm2(x)
return hidden_n.reshape((-1, self.n_features, self.embedding_dim)), lens
</code></pre>
<p>Alternative, possibly better, but currently not working solution;</p>
<pre class="lang-py prettyprint-override"><code>class Encoder2(nn.Module):
def __init__(self, seq_len, n_features, embedding_dim, hidden_dim, dropout):
super(Encoder2, self).__init__()
self.seq_len = seq_len
self.n_features = n_features
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.lstm1 = nn.LSTM(
input_size=n_features,
hidden_size=self.hidden_dim,
num_layers=2,
batch_first=True,
dropout=dropout,
proj_size=self.embedding_dim,
)
def forward(self, x):
_, (h_n, _) = self.lstm1(x)
return h_n[-1].unsqueeze(1), lens
</code></pre>
<p>Any help and tips about working with time-series, packed sequences, lstm-cells and dropout would be immensely appreciated, as I'm not finding much documentation/guidance elsewhere on the internet. Thank you!</p>
<p>Best, Lars Ankile</p> | 2021-11-06 14:33:11.800000+00:00 | 2022-01-14 13:51:26.813000+00:00 | null | pytorch|time-series|lstm|autoencoder|dropout | ['https://github.com/Krankile/ensemble_forecasting/blob/main/models/lstm_ae.py', 'https://arxiv.org/abs/2201.00426'] | 2 |
<p>Tensorflow's <a href="https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">Adam</a> implementation is just that: an implementation of <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">Adam</a>, exactly as it is defined and tested in the paper.</p>
<p>If you want to use Adam with L2 regularization for your problem you simply have to add an L2 regularization term to your loss with some regularization strength you can choose yourself.</p>
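<p>A sketch of what that looks like with the TF1 graph API (the regularization strength 0.001 and the name <code>cross_entropy_loss</code> are placeholders):</p>
<pre><code>l2 = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()
               if 'bias' not in v.name])
total_loss = cross_entropy_loss + 0.001 * l2
train_op = tf.train.AdamOptimizer(1e-3).minimize(total_loss)
</code></pre>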
<p>I can't tell you if that is necessary or helpful or what regularization and regularization strength to use, because that highly depends on the problem and is rather subjective.</p> | 2018-04-26 16:05:47.520000+00:00 | 2018-04-27 06:45:25.840000+00:00 | 2018-04-27 06:45:25.840000+00:00 | null | 50,045,039 | <p>Tensorflow's implementation of <a href="https://www.tensorflow.org/api_docs/python/tf/train/Optimizer" rel="nofollow noreferrer">AdamOptimzer</a> do not have regularization params like that in <a href="https://www.tensorflow.org/api_docs/python/tf/train/ProximalAdagradOptimizer" rel="nofollow noreferrer">ProximalAdamOptimizer</a>, for example <code>l2_regularization_strength</code>, is it necessary to add l2 norm in <a href="https://www.tensorflow.org/api_docs/python/tf/train/Optimizer" rel="nofollow noreferrer">AdamOptimzer</a>?</p> | 2018-04-26 14:02:55.690000+00:00 | 2018-04-27 06:45:25.840000+00:00 | null | tensorflow|optimization | ['https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer', 'https://arxiv.org/abs/1412.6980'] | 2 |
43,310,726 | <p>Not sure what you mean by proper documentation. This is an implementation of the paper (<a href="https://arxiv.org/pdf/1504.08083.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1504.08083.pdf</a>). Looks like you are trying to generate ROI's. Can you look through the helper functions as documented at the site to parse what you might need:</p>
<p>To run the toy example, make sure that in PARAMETERS.py the datasetName is set to "grocery".</p>
<ul>
<li><p>Run <code>A1_GenerateInputROIs.py</code> to generate the input ROIs for training and testing.</p></li>
<li><p>Run <code>A2_RunCntk_py3.py</code> to train a Fast R-CNN model using the CNTK Python API and compute test results.</p></li>
</ul>
<p>The algo will work on several candidate regions and then generate outputs: one for the classes of objects and another one that generates the bounding boxes for the objects belonging to those classes. Please refer to the code for getting the details of the implementation. </p> | 2017-04-09 19:05:27.083000+00:00 | 2017-04-09 19:05:27.083000+00:00 | null | null | 43,190,575 | <p>I am very new to CNTK.
I wanted to train a set of images (to detect objects like alcohol glasses/bottles) using CNTK - ResNet/Fast-R CNN.</p>
<p>I am trying to follow below documentation from GitHub; However, it does not appear to be a straight forward procedure. <a href="https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN" rel="nofollow noreferrer">https://github.com/Microsoft/CNTK/wiki/Object-Detection-using-Fast-R-CNN</a></p>
<p>I cannot find proper documentation to generate ROI's for the images with different sizes and shapes. And how to create object labels based on the trained models? Can someone point out to a proper documentation or training link using which I can work on the cntk model? Please see the attached image in which I was able to load a sample image with default ROI's in the script. How do I properly set the size and label the object in the image ? Thanks in advance!</p>
<p><a href="https://i.stack.imgur.com/HDxzZ.jpg" rel="nofollow noreferrer">sample image loaded for training</a></p> | 2017-04-03 17:14:32.810000+00:00 | 2018-01-25 15:46:14.553000+00:00 | null | object-detection|cntk|resnet | ['https://arxiv.org/pdf/1504.08083.pdf'] | 1 |
2,217,337 | <p>Your final sum will mostly be dominated by the largest addend. The simplest algorithm to exploit this could go like this (I cannot prove this):</p>
<ol>
<li>sort points descending by their nearest-neighbor distance</li>
<li>form pair of first entry and its nearest neighbor</li>
<li>remove pair from list</li>
<li>if list not empty goto 1.</li>
</ol>
<p>This should work very often.</p>
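<p>A Python sketch of the greedy heuristic above (nearest-neighbour distances are re-evaluated after each removal, as the goto step implies; <code>math.dist</code> needs Python 3.8+):</p>
<pre><code>import math

def greedy_pairing(points):
    pts = list(points)
    total, pairs = 0.0, []
    while pts:
        # nearest neighbour (distance, index) of point i among the remaining points
        def nearest(i):
            return min((math.dist(pts[i], pts[j]), j)
                       for j in range(len(pts)) if j != i)
        # the point whose nearest neighbour is farthest away is paired first
        i = max(range(len(pts)), key=lambda i: nearest(i)[0])
        d, j = nearest(i)
        pairs.append((pts[i], pts[j]))
        total += d
        for k in sorted((i, j), reverse=True):
            del pts[k]
    return total, pairs
</code></pre>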
<p>Since you are essentially looking for a clustering algorithm for clusters of 2 <a href="http://arxiv.org/abs/0706.2569" rel="nofollow noreferrer">this link</a> or a search for <a href="http://scholar.google.com/scholar?hl=en&q=+hadron+clustering+algorithm+shower" rel="nofollow noreferrer">clustering algorithms for jet reconstruction</a> might be interesting. People in the experimental particle physics community are working on heuristic algorithms for problems like this.</p> | 2010-02-07 16:14:33.380000+00:00 | 2010-02-07 16:14:33.380000+00:00 | null | null | 2,217,206 | <p>Given <strong>2N-points</strong> in a <strong>2D-plane</strong>, you have to group them into <strong>N pairs</strong> such that the overall sum of distances between the points of all of the pairs is the minimum possible value.<strong>The desired output is only the sum.</strong></p>
<p>In other words, if <strong>a1,a2,..an</strong> are the distances between points of first, second...and nth pair respectively, then <strong>(a1+a2+...an) should be minimum.</strong></p>
<p>Let us consider this test-case, if the <strong>2*5</strong> points are :
<strong>{20,20},
{40, 20},
{10, 10},
{2, 2},
{240, 6},
{12, 12},
{100, 120},
{6, 48},
{12, 18},
{0, 0}</strong></p>
<p>The desired output is <strong>237</strong>.</p>
<p>This is not my homework,I am inquisitive about different approaches rather than brute-force.</p> | 2010-02-07 15:21:28.623000+00:00 | 2010-02-10 12:17:11.963000+00:00 | 2010-02-07 16:22:43.500000+00:00 | c++|c|algorithm|math | ['http://arxiv.org/abs/0706.2569', 'http://scholar.google.com/scholar?hl=en&q=+hadron+clustering+algorithm+shower'] | 2 |
18,999,170 | <p>You can use the <a href="http://scikit-learn.org/stable/modules/decomposition.html#truncated-singular-value-decomposition-and-latent-semantic-analysis">TruncatedSVD</a> transformer from sklearn 0.14+: you call it with <code>fit_transform</code> on your database of documents and then call the <code>transform</code> method (from the same <code>TruncatedSVD</code> method) on the query document and then can compute the cosine similarity of the transformed query documents with the transformed database with the function: <code>sklearn.metrics.pairwise.cosine_similarity</code> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html">numpy.argsort</a> the result to find the index of most similar document.</p>
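<p>A compact sketch of that pipeline (the component count and the names <code>documents</code> / <code>query_doc</code> are illustrative):</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(documents)          # the document database

lsa = TruncatedSVD(n_components=100)
X_lsa = lsa.fit_transform(X)

q = lsa.transform(vectorizer.transform([query_doc]))
sims = cosine_similarity(q, X_lsa).ravel()
ranking = np.argsort(sims)[::-1]                 # indices of most similar documents first
</code></pre>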
<p>Note that under the hood, scikit-learn also uses NumPy but in a more efficient way than the snippet you gave (by using the <a href="http://arxiv.org/abs/0909.4061">Randomized SVD</a> trick by Halko, Martinsson and Tropp).</p> | 2013-09-25 07:49:39.967000+00:00 | 2013-09-25 07:49:39.967000+00:00 | null | null | 18,997,905 | <p>I am trying to write a script where I will calculate the similarity of few documents. I want to do it by using LSA. I have found the following code and change it a bit. I has as an input 3 documents and then as output a 3x3 matrix with the similarity between them. I want to do the same similarity calculation but only with sklearn library. Is that possible?</p>
<pre><code>from numpy import zeros
from scipy.linalg import svd
from math import log
from numpy import asarray, sum
from nltk.corpus import stopwords
from sklearn.metrics.pairwise import cosine_similarity
titles = [doc1,doc2,doc3]
ignorechars = ''',:'!'''
class LSA(object):
def __init__(self, stopwords, ignorechars):
self.stopwords = stopwords.words('english')
self.ignorechars = ignorechars
self.wdict = {}
self.dcount = 0
def parse(self, doc):
words = doc.split();
for w in words:
w = w.lower()
if w in self.stopwords:
continue
elif w in self.wdict:
self.wdict[w].append(self.dcount)
else:
self.wdict[w] = [self.dcount]
self.dcount += 1
def build(self):
self.keys = [k for k in self.wdict.keys() if len(self.wdict[k]) > 1]
self.keys.sort()
self.A = zeros([len(self.keys), self.dcount])
for i, k in enumerate(self.keys):
for d in self.wdict[k]:
self.A[i,d] += 1
def calc(self):
self.U, self.S, self.Vt = svd(self.A)
return -1*self.Vt
def TFIDF(self):
WordsPerDoc = sum(self.A, axis=0)
DocsPerWord = sum(asarray(self.A > 0, 'i'), axis=1)
rows, cols = self.A.shape
for i in range(rows):
for j in range(cols):
self.A[i,j] = (self.A[i,j] / WordsPerDoc[j]) * log(float(cols) / DocsPerWord[i])
mylsa = LSA(stopwords, ignorechars)
for t in titles:
mylsa.parse(t)
mylsa.build()
a = mylsa.calc()
cosine_similarity(a)
</code></pre>
<p><strong>From @ogrisel's answer:</strong></p>
<p>I run the following code, but my mouth is still open :) When TFIDF has max 80% similarity on two documents with the same subject, this code give me 99.99%. That's why I think that it is something wrong :P</p>
<pre><code>dataset = [doc1,doc2,doc3]
vectorizer = TfidfVectorizer(max_df=0.5,stop_words='english')
X = vectorizer.fit_transform(dataset)
lsa = TruncatedSVD()
X = lsa.fit_transform(X)
X = Normalizer(copy=False).fit_transform(X)
cosine_similarity(X)
</code></pre> | 2013-09-25 06:38:50.727000+00:00 | 2013-09-25 11:20:56.763000+00:00 | 2013-09-25 11:20:56.763000+00:00 | python|scikit-learn | ['http://scikit-learn.org/stable/modules/decomposition.html#truncated-singular-value-decomposition-and-latent-semantic-analysis', 'http://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.html', 'http://arxiv.org/abs/0909.4061'] | 3 |
46,751,549 | <p>I think you may use <a href="https://arxiv.org/abs/1409.0473" rel="nofollow noreferrer">attention mechanism</a> to convert the variable-length inputs to some fixed length tensor before you feed them into a feed forward network. </p> | 2017-10-15 04:00:23.530000+00:00 | 2017-10-15 04:00:23.530000+00:00 | null | null | 46,748,865 | <p>Assume I have a lists of inputs of different sizes, for example, some are of the shape[10,9,5] some are [7,6,5], I have to pad 0s to feed them into tensor flow with the same size, say [10,9,5], I need to do matrix multiplication and add the biases during the forward process which will introduce numbers in the padded 0 positions. So I have to create a mask matrix by myself to mask them? Or is there an easier way from tensorflow? Thanks!</p>
<p>BTW, I'm not feeding sequences nor using rnn. so I cannot use dynamic rnn</p> | 2017-10-14 20:07:01.640000+00:00 | 2017-10-15 04:00:23.530000+00:00 | null | tensorflow|mask|feed-forward | ['https://arxiv.org/abs/1409.0473'] | 1 |