a_id (int64) | a_body (string) | a_creation_date (string) | a_last_activity_date (string) | a_last_edit_date (string) | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date (string) | q_last_activity_date (string) | q_last_edit_date (string) | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
17,171,271 | <p>It appears it has. I found two papers <a href="http://arxiv.org/abs/1211.0498" rel="nofollow">here</a> and <a href="http://acl.ldc.upenn.edu/N/N07/N07-2024.pdf" rel="nofollow">here</a>, and there are probably others. The second one was published at NAACL (a high-quality conference) and contains a bunch of relevant references.</p> | 2013-06-18 14:17:06.287000+00:00 | 2013-06-18 14:17:06.287000+00:00 | null | null | 17,165,785 | <p>Are there any code samples or papers on the subject? I have not been able to find any resources directly related to the question after a bit of research.</p> | 2013-06-18 09:52:39.020000+00:00 | 2013-06-18 14:17:06.287000+00:00 | null | nlp | ['http://arxiv.org/abs/1211.0498', 'http://acl.ldc.upenn.edu/N/N07/N07-2024.pdf'] | 2
46,605,754 | <p>AFAIK, the Random Forest method is <em>highly</em> data-dependent and sensitive both to the random seed and to noise in the data. Therefore, changing to a different dataset with different noise characteristics and class separability might produce mediocre results even when the method worked perfectly for another dataset.</p>
<p>There is also a factor of pure chance in the <em>random</em> part of the method... Hence, any results should be repeated for validation. It may just be bad luck on this particular run, although your results suggest that the method is ill-suited for the dataset.</p>
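<p>As a minimal illustration of that advice (my own sketch, using scikit-learn's <code>RandomForestClassifier</code> and synthetic data as stand-ins for the from-scratch implementation and dataset in the question), repeat the evaluation over several seeds and look at the spread of the scores:</p>
<pre><code>from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)  # stand-in data
for seed in range(5):
    clf = RandomForestClassifier(n_estimators=5, random_state=seed)       # new seed each run
    scores = cross_val_score(clf, X, y, cv=5)
    print(seed, round(scores.mean(), 3), round(scores.std(), 3))
</code></pre>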
<p>If you really need to dive into the topic of Random Forest I would suggest a thorough summary in (freely available) <a href="https://arxiv.org/pdf/1407.7502.pdf" rel="nofollow noreferrer">Understanding Random Forests: From Theory to Practice</a> by Gilles Louppe.</p>
<p>There is also an interesting discussion of the method's sensitivity to outliers on the <a href="https://stats.stackexchange.com/questions/187200/how-are-random-forests-not-sensitive-to-outliers">CrossValidated</a> forum.</p> | 2017-10-06 12:27:44.230000+00:00 | 2017-10-06 12:33:48.600000+00:00 | 2017-10-06 12:33:48.600000+00:00 | null | 46,603,778 | <p>I use the random forest code based on the example <a href="https://machinelearningmastery.com/implement-random-forest-scratch-python/" rel="nofollow noreferrer">here</a>.
Here it is (skip over to the end to see the question):</p>
<pre><code># Select the best split point for a dataset
def get_split(dataset, n_features):
class_values = list(set(row[-1] for row in dataset))
b_index, b_value, b_score, b_groups = 999, 999, 999, None
features = list()
while len(features) < n_features:
index = randrange(len(dataset[0])-1)
if index not in features:
features.append(index)
for index in features:
for row in dataset:
groups = test_split(index, row[index], dataset)
gini = gini_index(groups, class_values)
if gini < b_score:
b_index, b_value, b_score, b_groups = index, row[index], gini, groups
return {'index':b_index, 'value':b_value, 'groups':b_groups}
# Random Forest Algorithm on Sonar Dataset
from random import seed
from random import randrange
from csv import reader
from math import sqrt
# Load a CSV file
def load_csv(filename):
dataset = list()
with open(filename, 'r') as file:
csv_reader = reader(file)
for row in csv_reader:
if not row:
continue
dataset.append(row)
return dataset
# Convert string column to float
def str_column_to_float(dataset, column):
for row in dataset:
row[column] = float(row[column].strip())
# Convert string column to integer
def str_column_to_int(dataset, column):
class_values = [row[column] for row in dataset]
unique = set(class_values)
lookup = dict()
for i, value in enumerate(unique):
lookup[value] = i
for row in dataset:
row[column] = lookup[row[column]]
return lookup
# Split a dataset into k folds
def cross_validation_split(dataset, n_folds):
dataset_split = list()
dataset_copy = list(dataset)
fold_size = int(len(dataset) / n_folds)
for i in range(n_folds):
fold = list()
while len(fold) < fold_size:
index = randrange(len(dataset_copy))
fold.append(dataset_copy.pop(index))
dataset_split.append(fold)
return dataset_split
# Calculate accuracy percentage
def accuracy_metric(actual, predicted):
correct = 0
for i in range(len(actual)):
if actual[i] == predicted[i]:
correct += 1
return correct / float(len(actual)) * 100.0
# Evaluate an algorithm using a cross validation split
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
folds = cross_validation_split(dataset, n_folds)
scores = list()
for fold in folds:
train_set = list(folds)
train_set.remove(fold)
train_set = sum(train_set, [])
test_set = list()
for row in fold:
row_copy = list(row)
test_set.append(row_copy)
row_copy[-1] = None
predicted = algorithm(train_set, test_set, *args)
actual = [row[-1] for row in fold]
accuracy = accuracy_metric(actual, predicted)
scores.append(accuracy)
return scores
# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
left, right = list(), list()
for row in dataset:
if row[index] < value:
left.append(row)
else:
right.append(row)
return left, right
# Calculate the Gini index for a split dataset
def gini_index(groups, classes):
# count all samples at split point
n_instances = float(sum([len(group) for group in groups]))
# sum weighted Gini index for each group
gini = 0.0
for group in groups:
size = float(len(group))
# avoid divide by zero
if size == 0:
continue
score = 0.0
# score the group based on the score for each class
for class_val in classes:
p = [row[-1] for row in group].count(class_val) / size
score += p * p
# weight the group score by its relative size
gini += (1.0 - score) * (size / n_instances)
return gini
# Select the best split point for a dataset
def get_split(dataset, n_features):
class_values = list(set(row[-1] for row in dataset))
b_index, b_value, b_score, b_groups = 999, 999, 999, None
features = list()
while len(features) < n_features:
index = randrange(len(dataset[0]) - 1)
if index not in features:
features.append(index)
for index in features:
for row in dataset:
groups = test_split(index, row[index], dataset)
gini = gini_index(groups, class_values)
if gini < b_score:
b_index, b_value, b_score, b_groups = index, row[index], gini, groups
return {'index': b_index, 'value': b_value, 'groups': b_groups}
# Create a terminal node value
def to_terminal(group):
outcomes = [row[-1] for row in group]
return max(set(outcomes), key=outcomes.count)
# Create child splits for a node or make terminal
def split(node, max_depth, min_size, n_features, depth):
left, right = node['groups']
del (node['groups'])
# check for a no split
if not left or not right:
node['left'] = node['right'] = to_terminal(left + right)
return
# check for max depth
if depth >= max_depth:
node['left'], node['right'] = to_terminal(left), to_terminal(right)
return
# process left child
if len(left) <= min_size:
node['left'] = to_terminal(left)
else:
node['left'] = get_split(left, n_features)
split(node['left'], max_depth, min_size, n_features, depth + 1)
# process right child
if len(right) <= min_size:
node['right'] = to_terminal(right)
else:
node['right'] = get_split(right, n_features)
split(node['right'], max_depth, min_size, n_features, depth + 1)
# Build a decision tree
def build_tree(train, max_depth, min_size, n_features):
root = get_split(train, n_features)
split(root, max_depth, min_size, n_features, 1)
return root
# Make a prediction with a decision tree
def predict(node, row):
if row[node['index']] < node['value']:
if isinstance(node['left'], dict):
return predict(node['left'], row)
else:
return node['left']
else:
if isinstance(node['right'], dict):
return predict(node['right'], row)
else:
return node['right']
# Create a random subsample from the dataset with replacement
def subsample(dataset, ratio):
sample = list()
n_sample = round(len(dataset) * ratio)
while len(sample) < n_sample:
index = randrange(len(dataset))
sample.append(dataset[index])
return sample
# Make a prediction with a list of bagged trees
def bagging_predict(trees, row):
predictions = [predict(tree, row) for tree in trees]
return max(set(predictions), key=predictions.count)
# Random Forest Algorithm
def random_forest(train, test, max_depth, min_size, sample_size, n_trees, n_features):
trees = list()
for i in range(n_trees):
sample = subsample(train, sample_size)
tree = build_tree(sample, max_depth, min_size, n_features)
trees.append(tree)
predictions = [bagging_predict(trees, row) for row in test]
return (predictions)
</code></pre>
<p>In order to generalize it so it can be run on any dataset, I wrote the following:</p>
<pre><code>import pandas as pd
file_path ='http://archive.ics.uci.edu/ml/machine-learning-databases/undocumented/connectionist-bench/sonar/sonar.all-data'
dataset2 =pd.read_csv(file_path, header=None, sep=',')
v = dataset2.values
f = pd.factorize(v.ravel())[0].reshape(v.shape)
dataset1 = pd.DataFrame(f)
df = dataset1.astype('str')
dataset = df.values.tolist()
target_index = 60
for i in range(0, len(dataset[0])):
if i != target_index:
str_column_to_float(dataset, i)
# convert class column to integers
str_column_to_int(dataset, target_index)
n_folds = 5
max_depth = 10
min_size = 1
sample_size = 1.0
n_features = int(sqrt(len(dataset[0]) - 1))
for n_trees in [5]:
scores = evaluate_algorithm(dataset, random_forest, n_folds, max_depth, min_size, sample_size, n_trees, n_features)
print('Trees: %d' % n_trees)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))
</code></pre>
<p>The above-mentioned code works great for the SONAR dataset. Its structure is:</p>
<pre><code>0.0200,0.0371,0.0428,0.0207,0.0954,0.0986,0.1539,0.1601,0.3109,0.2111,0.1609,0.1582,0.2238,0.0645,0.0660,0.2273,0.3100,0.2999,0.5078,0.4797,0.5783,0.5071,0.4328,0.5550,0.6711,0.6415,0.7104,0.8080,0.6791,0.3857,0.1307,0.2604,0.5121,0.7547,0.8537,0.8507,0.6692,0.6097,0.4943,0.2744,0.0510,0.2834,0.2825,0.4256,0.2641,0.1386,0.1051,0.1343,0.0383,0.0324,0.0232,0.0027,0.0065,0.0159,0.0072,0.0167,0.0180,0.0084,0.0090,0.0032,R
0.0453,0.0523,0.0843,0.0689,0.1183,0.2583,0.2156,0.3481,0.3337,0.2872,0.4918,0.6552,0.6919,0.7797,0.7464,0.9444,1.0000,0.8874,0.8024,0.7818,0.5212,0.4052,0.3957,0.3914,0.3250,0.3200,0.3271,0.2767,0.4423,0.2028,0.3788,0.2947,0.1984,0.2341,0.1306,0.4182,0.3835,0.1057,0.1840,0.1970,0.1674,0.0583,0.1401,0.1628,0.0621,0.0203,0.0530,0.0742,0.0409,0.0061,0.0125,0.0084,0.0089,0.0048,0.0094,0.0191,0.0140,0.0049,0.0052,0.0044,R
</code></pre>
<p>These are the results (which seem OK):</p>
<pre><code>Trees: 5
Scores: [100.0, 95.1219512195122, 100.0, 97.5609756097561, 100.0]
Mean Accuracy: 98.537%
</code></pre>
<p>When I change the dataset to breast-cancer-wisconsin:</p>
<pre><code>842302,M,17.99,10.38,122.8,1001,0.1184,0.2776,0.3001,0.1471,0.2419,0.07871,1.095,0.9053,8.589,153.4,0.006399,0.04904,0.05373,0.01587,0.03003,0.006193,25.38,17.33,184.6,2019,0.1622,0.6656,0.7119,0.2654,0.4601,0.1189
842517,M,20.57,17.77,132.9,1326,0.08474,0.07864,0.0869,0.07017,0.1812,0.05667,0.5435,0.7339,3.398,74.08,0.005225,0.01308,0.0186,0.0134,0.01389,0.003532,24.99,23.41,158.8,1956,0.1238,0.1866,0.2416,0.186,0.275,0.08902
</code></pre>
<p>I change the relevant code to:</p>
<pre><code>import pandas as pd
file_path ='https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/wdbc.data'
dataset2 =pd.read_csv(file_path, header=None, sep=',')
v = dataset2.values
f = pd.factorize(v.ravel())[0].reshape(v.shape)
dataset1 = pd.DataFrame(f)
df = dataset1.astype('str')
dataset = df.values.tolist()
target_index = 1 ## <----
for i in range(0, len(dataset[0])):
if i != target_index:
str_column_to_float(dataset, i)
# convert class column to integers
str_column_to_int(dataset, target_index)
n_folds = 5
max_depth = 10
min_size = 1
sample_size = 1.0
n_features = int(sqrt(len(dataset[0]) - 1))
for n_trees in [5]:
scores = evaluate_algorithm(dataset, random_forest, n_folds, max_depth, min_size, sample_size, n_trees, n_features)
print('Trees: %d' % n_trees)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))
</code></pre>
<p>It runs for a very long time and the results seem wrong:</p>
<pre><code>Trees: 5
Scores: [0.0, 0.0, 0.0, 0.8849557522123894, 0.0]
Mean Accuracy: 0.177%
</code></pre> | 2017-10-06 10:33:30.087000+00:00 | 2017-10-06 12:33:48.600000+00:00 | null | python|dataset|random-forest | ['https://arxiv.org/pdf/1407.7502.pdf', 'https://stats.stackexchange.com/questions/187200/how-are-random-forests-not-sensitive-to-outliers'] | 2 |
47,784,194 | <p>TL;DR compute the receptive field ignoring all skip connections.</p>
<p>First, in the general case, let's say we have two branches of data flow - A and B. You can compute the receptive field for branches A and B independently, and then simply take the maximum when the branches merge. (The reason you can take the max is that branches typically merge via channel concatenation.)</p>
<p>Now, when one branch is a skip connection and the other is not, the one which is not gives the larger receptive field. If you have many skip connections, the longest route (with no skip connections) gives the maximum receptive field. Hence the result in the TL;DR.</p>
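<p>To make that rule concrete, here is a small Python sketch (my own illustration, not code from the linked pages): compute the theoretical receptive field of each branch from its (kernel size, stride) layers and take the maximum where the branches merge.</p>
<pre><code>def receptive_field(layers):
    """Theoretical receptive field of a sequential stack of (kernel_size, stride) layers."""
    rf, jump = 1, 1                    # start from a single output position
    for kernel, stride in layers:
        rf += (kernel - 1) * jump      # each layer widens the field by (k - 1) * current jump
        jump *= stride
    return rf

main_path = [(3, 1), (3, 1)]           # e.g. the two 3x3 convs inside a residual block
skip_path = []                         # the identity skip connection contributes no layers

# When the branches merge, take the maximum over the branches:
print(max(receptive_field(main_path), receptive_field(skip_path)))  # -> 5
</code></pre>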
<hr>
<p>Getting the maximum among branches becomes more complicated if instead of a simple skip connection you have something like an <a href="https://arxiv.org/abs/1409.4842" rel="nofollow noreferrer">inception block</a>.
In those cases, you may want to <a href="http://kukuruza.github.io/receptive_field" rel="nofollow noreferrer">compute the receptive field directly by definition</a>.</p> | 2017-12-13 01:30:39.673000+00:00 | 2017-12-13 01:30:39.673000+00:00 | null | null | 47,332,379 | <p>Although there are many resources about how to calculate the receptive field (RF) of CNNs (ex: <a href="http://fomoro.com/tools/receptive-fields" rel="nofollow noreferrer">http://fomoro.com/tools/receptive-fields</a>), I didn't find anything regarding skip connections. In [1] they mention that skip connections make the effective RF smaller, but what happens to the theoretical RF?</p>
<p>At the end of the day, <strong>I would like to know how to calculate the receptive field of a network comprising many residual blocks</strong>.</p>
<p>Thanks,
Daniel</p> | 2017-11-16 14:36:29.363000+00:00 | 2017-12-13 01:30:39.673000+00:00 | null | conv-neural-network|convolution | ['https://arxiv.org/abs/1409.4842', 'http://kukuruza.github.io/receptive_field'] | 2 |
37,498,823 | <p>Note: the original answer was completely rewritten, because I misinterpreted the definition of the problem.</p>
<p>The evaluation of the kernel distance between <code>Xi</code> and <code>Xj</code> is presented below. Two implementations of the algorithm are presented. The first is inefficient but can easily be related to the definition of the kernel distance. The second is much more efficient, but may not be as clear due to several vectorisation tricks.</p>
<p>The code assumes the following interpretation of the problem: </p>
<ol>
<li><code>Xi</code> and <code>Xj</code> are 2 data sets that contain 425 and 4 points, respectively. Each point belongs to <code>R^3</code> (real vector space with dimension 3).</li>
<li>The kernel distance between two data sets is calculated according to the definition given in the article by J.M. Phillips and S. Venkatasubramanian "A Gentle Introduction to the Kernel Distance" that can be found at the following <a href="https://arxiv.org/pdf/1103.1625v2.pdf" rel="nofollow noreferrer">link</a>. The definition is also provided below:</li>
</ol>
<hr>
<p><a href="https://i.stack.imgur.com/4TBRo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4TBRo.png" alt="enter image description here"></a></p>
<hr>
<p>The most straightforward implementation of the algorithm:</p>
<pre><code>% Initialisation.
clear;
clc;
% Construct Xi.
Xi = [randn(425, 1) randn(425, 1) randn(425, 1)];
% Definition of Xj.
Xj = [0.1 0.2 0.3; 0 0 0; -0.1 -0.1 -0.2; 1 -8 4];
% Convert to cell arrays.
Xi = mat2cell(Xi, ones(1, length(Xi(:, 1))), 3);
Xj = mat2cell(Xj, ones(1, length(Xj(:, 1))), 3);
% First, construct the kernel function for the evaluation of individual
% points in Xi and Xj
omega = 150;
a = 2;
kerFunction = @(xi, xj) exp(sum(abs(xi - xj).^a)/(omega^2));
kerDist = 0;
for i = 1 : length(Xj)
for j = 1 : length(Xj)
kerDist = kerDist + kerFunction(Xj{i}, Xj{j});
end
end
for i = 1 : length(Xi)
for j = 1 : length(Xi)
kerDist = kerDist + kerFunction(Xi{i}, Xi{j});
end
end
for i = 1 : length(Xi)
for j = 1 : length(Xj)
kerDist = kerDist - 2*kerFunction(Xi{i}, Xj{j});
end
end
</code></pre>
<hr>
<p>A more efficient implementation of the algorithm is presented below:</p>
<pre><code>clear;
% Define constants.
omega = 150;
a = 2;
% Definition of Xi.
Xi = [randn(425, 1) randn(425, 1) randn(425, 1)];
% Definition of Xj.
Xj = [0.1 0.2 0.3; 0 0 0; -0.1 -0.1 -0.2; 1 -8 4];
% Definition of the characteristics of the data sets.
numPointsXj = length(Xj(:, 1));
numPointsXi = length(Xi(:, 1));
% Define a handle function for the definition of indices for the
% vectorisation of the kernel function.
hdlRepIdxPermutation = @(numPoints, numMatrixRep) ...
repmat( ...
(1 : numPoints : numPoints*(numMatrixRep - 1) + 1)', ...
1, numPoints ...
) + ...
repmat(0 : (numPoints - 1), numMatrixRep, 1);
tic
% Calculate the term that corresponds to K(p, p') in the definition of the
% kernel distance.
repXiRight = repmat(Xi, numPointsXi, 1);
leftIdxPermutationXi = hdlRepIdxPermutation(numPointsXi, numPointsXi);
repXiLeft = repXiRight(leftIdxPermutationXi(:), :);
kerDistComp1 = sum(exp(sum(abs(repXiLeft - repXiRight).^a, 2)/(omega^2)));
% Calculate the term that corresponds to K(q, q') in the definition of the
% kernel distance.
repXjRight = repmat(Xj, numPointsXj, 1);
leftIdxPermutationXj = hdlRepIdxPermutation(numPointsXj, numPointsXj);
repXjLeft = repXjRight(leftIdxPermutationXj(:), :);
kerDistComp2 = sum(exp(sum(abs(repXjLeft - repXjRight).^a, 2)/(omega^2)));
% Calculate the term that corresponds to K(p, q) in the definition of the
% kernel distance.
repXjRight = repmat(Xj, numPointsXi, 1);
repXiLeft = repmat(Xi, numPointsXj, 1);
leftIdxPermutationXi = hdlRepIdxPermutation(numPointsXi, numPointsXj);
repXiLeft = repXiLeft(leftIdxPermutationXi(:), :);
kerDistComp3 = -2*sum(exp(sum(abs(repXiLeft - repXjRight).^a, 2)/(omega^2)));
kerDist = kerDistComp1 + kerDistComp2 + kerDistComp3;
toc
disp(kerDist);
</code></pre> | 2016-05-28 12:11:11.723000+00:00 | 2017-07-14 20:01:11.483000+00:00 | 2017-07-14 20:01:11.483000+00:00 | null | 37,487,340 | <p>I'm trying to implement this kernel function<a href="https://i.stack.imgur.com/Uqzqu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Uqzqu.png" alt="enter image description here"></a></p>
<p>which is also known as radial basis function. Suppose that <code>a = 2</code>, <code>b = 1</code> and <code>σ = 150</code>. </p>
<ul>
<li>Xi is a 425x3 matrix</li>
<li>Xj is a 4x3 matrix</li>
</ul>
<p>I've come up with this code but I'm not sure it is correct. Can you help me?</p>
<pre><code>kS = exp( - (pdist2(Xj,Xi).^2) / (sigma^2) )
</code></pre> | 2016-05-27 15:32:49.573000+00:00 | 2017-07-14 20:01:11.483000+00:00 | 2016-05-27 15:56:20.870000+00:00 | matlab | ['https://arxiv.org/pdf/1103.1625v2.pdf', 'https://i.stack.imgur.com/4TBRo.png'] | 2 |
61,630,425 | <p>Here is a survey of algorithms to generate uniform random integers from random bits.</p>
<ul>
<li>J. Lumbroso's Fast Dice Roller in "<a href="https://arxiv.org/abs/1304.1916" rel="nofollow noreferrer">Optimal Discrete Uniform Generation from Coin Flips, and Applications</a>, 2013. See also the implementation at the end of this answer.</li>
<li><a href="http://mathforum.org/library/drmath/view/65653.html" rel="nofollow noreferrer">The Math Forum</a>, 2004. See also "<a href="https://arxiv.org/abs/1012.4290" rel="nofollow noreferrer">Bit Recycling for Scaling Random Number Generators</a>".</li>
<li>D. Lemire, "<a href="https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/" rel="nofollow noreferrer">A Fast Alternative to the Modulo Reduction</a>".</li>
<li>M. O'Neill, "<a href="https://www.pcg-random.org/posts/bounded-rands.html" rel="nofollow noreferrer">Efficiently Generating a Number in a Range</a>".</li>
</ul>
<p>Some of these algorithms are "constant-time", others are unbiased, and still others are "optimal" in terms of the number of random bits it uses on average. In the rest of this answer we will assume we have a "true" random generator that can produce unbiased and independent random bits.</p>
<p>In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. They also gave lower bounds on the number of bits a given algorithm will need on average for this task. In this case, an <em>optimal</em> algorithm to generate integers in <code>[0, n)</code> uniformly will need at most <code>log2(n) + 2</code> bits on average. There are many examples of <em>optimal</em> algorithms in this sense, including the Fast Dice Roller and presumably the Math Forum's algorithm. On the other hand, none of the random integer algorithms surveyed by O'Neill is optimal (since they rely on generating blocks of bits at a time, rather than individual bits).</p>
<p>However, any <em>optimal</em> integer generator that is also <em>unbiased</em> will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the <code>n</code> outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—</p>
<ul>
<li>have an "infinite" depth, or</li>
<li>include "rejection" leaves at the end of the tree,</li>
</ul>
<p>and in either case, the algorithm won't run in constant time and will run forever in the worst case. (On the other hand, when <code>n</code> is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses "rejection" events to ensure it's unbiased; see the comment in the code below.</p>
<p>Thus, in general, <strong>a random integer generator can be <em>either</em> unbiased <em>or</em> constant-time (or even neither), but not both</strong>. Notably, there is generally no way to "fix" the worst case of an indefinite running time without introducing bias. For instance, modulo reductions (as well as the "fast alternative" by Lemire) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)</p>
<h3>Fast Dice Roller Implementation</h3>
<p>The following is JavaScript code that implements the Fast Dice Roller. Note that it uses a rejection event and a loop to ensure it's unbiased.</p>
<pre><code>function randomInt(minInclusive, maxExclusive) {
var maxInclusive = (maxExclusive - minInclusive) - 1
var x = 1
var y = 0
while(true) {
x = x * 2
var randomBit = (Math.random() < 0.5 ? 0 : 1)
y = y * 2 + randomBit
if(x > maxInclusive) {
if (y <= maxInclusive) { return y + minInclusive }
// Rejection
x = x - maxInclusive - 1
y = y - maxInclusive - 1
}
}
}
</code></pre> | 2020-05-06 08:15:19.763000+00:00 | 2020-07-29 02:41:16.353000+00:00 | 2020-07-29 02:41:16.353000+00:00 | null | 26,613,099 | <p>Assuming I can generate random bytes of data, how can I use that to choose an element out of an array of <code>n</code> elements?</p>
<p>If I have 256 elements I can generate 1 byte of entropy (8 bits), and then use that to pick my element simply by converting it to an integer.</p>
<p>If I have 2 elements I can generate 1 byte, discard 7 bits and use the remaining bit to select my element.</p>
<p>But what if I have 3 elements? 1 bit is too few and 2 is too many. How would I randomly select 1 of the 3 elements with equal probability?</p> | 2014-10-28 16:08:04.327000+00:00 | 2020-07-29 02:41:16.353000+00:00 | 2014-10-28 16:16:16.877000+00:00 | algorithm|random | ['https://arxiv.org/abs/1304.1916', 'http://mathforum.org/library/drmath/view/65653.html', 'https://arxiv.org/abs/1012.4290', 'https://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/', 'https://www.pcg-random.org/posts/bounded-rands.html'] | 5 |
55,596,978 | <p>
Just try to plot your data and you will see that the two functions produce datasets of different complexities.
In the second case the classes are almost inseparable, so you need to increase your model complexity.</p>
<p>How to increase your model complexity (a rough model sketch follows this list):</p>
<ul>
<li>More layers</li>
<li>More hidden units</li>
<li>Tweak batch size</li>
<li>Tweak lr, try to use lr-scheduler </li>
<li>Try another optimizer, for example Adam</li>
<li>To avoid overfitting in very deep networks add dropout layers</li>
<li>Take a look at very promising <a href="https://arxiv.org/abs/1706.02515" rel="nofollow noreferrer">Self Normalizing Neural Networks</a></li>
</ul>
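<p>For instance, a rough sketch of a higher-capacity model along those lines (my own example with assumed layer sizes and dropout rate, not tuned for this exact task):</p>
<pre class="lang-py prettyprint-override"><code>import torch

class DeeperFeedForward(torch.nn.Module):
    def __init__(self, input_size, num_classes, hidden=128, p_drop=0.2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_size, hidden),
            torch.nn.ReLU(),
            torch.nn.Dropout(p_drop),
            torch.nn.Linear(hidden, hidden),
            torch.nn.ReLU(),
            torch.nn.Dropout(p_drop),
            torch.nn.Linear(hidden, num_classes),
        )

    def forward(self, x, **kwargs):
        # Return raw logits; torch.nn.CrossEntropyLoss applies log-softmax internally
        return self.net(x.float())
</code></pre>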
<p>Code to plot the two datasets:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
DATASHAPE = (2000, 2)
NUM_CLASSES = 3
def sum_mod_label(x):
return np.array([x for x in map(lambda x: x % NUM_CLASSES, map(int, (x[:, 0] + x[:, 1]) * 100))])
def sum_bin_label(x):
def binit(x):
if x < 0.807:
return 0
if x < 1.169:
return 1
return 2
return np.array([x for x in map(lambda x: binit(x), x[:, 0] + x[:, 1])])
data = np.random.random_sample(DATASHAPE)
bin_label = sum_bin_label(data)
mod_label = sum_mod_label(data)
def plot_data(data, label, title):
plt.figure(figsize=(9, 9))
plt.title(title)
plt.scatter(data[..., 0], data[..., 1], c=label)
plt.show()
plot_data(data, bin_label, 'sum_bin_label')
plot_data(data, mod_label, 'sum_mod_label')
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/tuTBX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tuTBX.png" alt="plot1"></a>
<a href="https://i.stack.imgur.com/cOfqh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cOfqh.png" alt="plot2"></a></p> | 2019-04-09 16:05:51.977000+00:00 | 2019-04-11 11:16:57.820000+00:00 | 2019-04-11 11:16:57.820000+00:00 | null | 55,595,898 | <p>I would like to fit a pytorch feed forward network on a crafted dataset with dependency between labels y and two features from the dataset.</p>
<p>Dataset is generated using <code>np.random.random_sample</code> for a distribution between 0 and 1 and label is computed using the two functions below:</p>
<ul>
<li><code>sum_bin_label</code></li>
<li><code>sum_mod_label</code></li>
</ul>
<p>For the first function I can see that both the training and validation loss of the neural network decrease and eventually it is able to approximate the function with close to 100% accuracy, which is expected; but for the second function, which uses <code>sum</code> and <code>modulo(num_classes)</code>, it is unable to make any progress. I have tried multiple learning rates and network architectures but did not manage to fit it.</p>
<p>I am interested to see how that function can be fitted.</p>
<p>Below is a simple example that can be pasted directly into a Jupyter notebook or any kind of Python REPL for that matter.</p>
<p>Thanks in advance!</p>
<p>Imports</p>
<pre><code>import torch
import numpy as np
from sklearn.model_selection import train_test_split
import torch.utils.data as utils
DATASHAPE = (2000, 2)
NUM_CLASSES = 3
</code></pre>
<p>Functions and classes used</p>
<pre><code>def sum_mod_label(x):
return np.array([x for x in map(
lambda x: x % NUM_CLASSES, map(int, (x[:, 0] + x[:, 1]) * 100))])
def sum_bin_label(x):
def binit(x):
if x < 0.807:
return 0
if x < 1.169:
return 1
return 2
return np.array(
[x for x in map(lambda x: binit(x), x[:, 0] + x[:, 1])])
class RandomModuloDataset(utils.Dataset):
def __init__(self, shape, label_fn):
self.data = np.random.random_sample(shape)
self.label = label_fn(self.data)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
return self.data[idx, :], self.label[idx]
class FeedForward(torch.nn.Module):
def __init__(self, input_size, num_classes):
super().__init__()
self.input_size = input_size
self.num_classes = num_classes
self.relu = torch.nn.ReLU()
self.softmax = torch.nn.Softmax(dim=-1)
self.fc1 = torch.nn.Linear(
self.input_size, self.input_size)
self.fc2 = torch.nn.Linear(
self.input_size, self.num_classes)
def forward(self, x, **kwargs):
output = self.fc2(self.relu(self.fc1(x.float())))
return self.softmax(output)
def fitit(trainloader, epochs=10):
neurons = DATASHAPE[1]
net = FeedForward(neurons, NUM_CLASSES)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(epochs):
for i, data in enumerate(trainloader, 0):
inputs, labels = data
optimizer.zero_grad()
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
print('[%d] loss: %.3f' %
(epoch + 1, loss.item()))
</code></pre>
<p>Iteration with first function (eventually converges)</p>
<pre><code>sum_bin_tloader = utils.DataLoader(
RandomModuloDataset(DATASHAPE, sum_bin_label))
fitit(sum_bin_tloader, epochs=50)
[1] loss: 1.111
[2] loss: 1.133
[3] loss: 1.212
[4] loss: 1.264
[5] loss: 1.261
[6] loss: 1.199
[7] loss: 1.094
[8] loss: 1.011
[9] loss: 0.958
[10] loss: 0.922
[11] loss: 0.896
[12] loss: 0.876
[13] loss: 0.858
[14] loss: 0.844
[15] loss: 0.831
[16] loss: 0.820
[17] loss: 0.811
[18] loss: 0.803
[19] loss: 0.795
[20] loss: 0.788
[21] loss: 0.782
[22] loss: 0.776
[23] loss: 0.771
[24] loss: 0.766
[25] loss: 0.761
[26] loss: 0.757
[27] loss: 0.753
[28] loss: 0.749
[29] loss: 0.745
[30] loss: 0.741
[31] loss: 0.738
[32] loss: 0.734
[33] loss: 0.731
[34] loss: 0.728
[35] loss: 0.725
[36] loss: 0.722
[37] loss: 0.719
[38] loss: 0.717
[39] loss: 0.714
[40] loss: 0.712
[41] loss: 0.709
[42] loss: 0.707
[43] loss: 0.705
[44] loss: 0.703
[45] loss: 0.701
[46] loss: 0.699
[47] loss: 0.697
[48] loss: 0.695
[49] loss: 0.693
[50] loss: 0.691
</code></pre>
<p>Iteration with second function (does not converge)</p>
<pre><code>sum_mod_tloader = utils.DataLoader(
RandomModuloDataset(DATASHAPE, sum_mod_label))
fitit(sum_mod_tloader, epochs=50)
[1] loss: 1.059
[2] loss: 1.065
[3] loss: 1.079
[4] loss: 1.087
[5] loss: 1.091
[6] loss: 1.092
[7] loss: 1.092
[8] loss: 1.092
[9] loss: 1.092
[10] loss: 1.091
[11] loss: 1.091
[12] loss: 1.091
[13] loss: 1.091
[14] loss: 1.091
[15] loss: 1.090
[16] loss: 1.090
[17] loss: 1.090
[18] loss: 1.090
[19] loss: 1.090
[20] loss: 1.090
[21] loss: 1.090
[22] loss: 1.089
[23] loss: 1.089
[24] loss: 1.089
[25] loss: 1.089
[26] loss: 1.089
[27] loss: 1.089
[28] loss: 1.089
[29] loss: 1.089
[30] loss: 1.089
[31] loss: 1.089
[32] loss: 1.089
[33] loss: 1.089
[34] loss: 1.089
[35] loss: 1.089
[36] loss: 1.089
[37] loss: 1.089
[38] loss: 1.089
[39] loss: 1.089
[40] loss: 1.089
[41] loss: 1.089
[42] loss: 1.089
[43] loss: 1.089
[44] loss: 1.089
[45] loss: 1.089
[46] loss: 1.089
[47] loss: 1.089
[48] loss: 1.089
[49] loss: 1.089
[50] loss: 1.089
</code></pre>
<p>I expect to be able to fit both functions, since a NN should be able to find any function y=f(x) describing the dependent variable, but the training is not progressing for sum_mod_label.</p>
<p>Using catboost I was able to get reasonable accuracy (~75% on the sum_mod_label)</p> | 2019-04-09 15:05:31.687000+00:00 | 2019-04-11 11:16:57.820000+00:00 | 2019-04-09 20:05:57.657000+00:00 | python|machine-learning|pytorch | ['https://arxiv.org/abs/1706.02515', 'https://i.stack.imgur.com/tuTBX.png', 'https://i.stack.imgur.com/cOfqh.png'] | 3 |
51,315,938 | <p>It's an interesting proposal; however, I don't think it's quite that straightforward. To predict potential shop locations, we will have to put a model in place that is trained on existing data points (I feel GBM is a good fit for such use cases). I did a bit of research and found that this paper has some useful information for your use case - <a href="https://arxiv.org/pdf/1609.02839.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.02839.pdf</a>. </p>
<p>Regarding TensorFlow, the JS API is relatively new and I don't think it has all of TensorFlow's goodness yet. However, for geospatial analysis using TensorFlow, you might find this link interesting - <a href="https://github.com/Qberto/ML_ObjectDetection_CAFO" rel="nofollow noreferrer">https://github.com/Qberto/ML_ObjectDetection_CAFO</a></p> | 2018-07-12 23:54:17.797000+00:00 | 2018-07-12 23:54:17.797000+00:00 | null | null | 51,201,532 | <p>I am currently exploring ML for an application I'm working on and I would like to build a geospatial model using TensorFlow.js. </p>
<p>As inputs I'd have lat & long for each location, along with other parameters such as type of location or type of business. </p>
<p>I want to build a predictive model that takes those inputs and can predict, for example, what type of business will eventually open and where they might potentially set up shop.</p>
<p>Can anyone point me in the right direction? My main concern is to learn how to build a geospatial model with TensorFlow.js that has prediction capabilities.</p>
<p>Thanks</p> | 2018-07-06 00:19:55.560000+00:00 | 2018-07-12 23:54:17.797000+00:00 | null | javascript|tensorflow|machine-learning | ['https://arxiv.org/pdf/1609.02839.pdf', 'https://github.com/Qberto/ML_ObjectDetection_CAFO'] | 2 |
64,133,588 | <p>This code snippet is an implementation of the equation on the top of page 5 of the <a href="https://arxiv.org/pdf/1706.03762.pdf" rel="nofollow noreferrer">Attention is all you need</a> paper that introduced the Transformer models in 2017. The computation is illustrated in Figure 2 of the paper:</p>
<p><a href="https://i.stack.imgur.com/BUM0d.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUM0d.png" alt="enter image description here" /></a></p>
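<p>As a rough illustration of what those three <code>Dense</code> layers do (a single-head numpy sketch of my own, not the actual Trax internals): each of q, k and v gets its own learned projection before the scaled dot-product attention is applied.</p>
<pre><code>import numpy as np

def scaled_dot_product_attention(q, k, v):
    scores = q @ k.T / np.sqrt(q.shape[-1])        # similarity of every query to every key
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)      # softmax over the keys
    return weights @ v                             # weighted sum of the values

seq_len, d_feature = 5, 8
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_feature))          # incoming activations

# The three parallel Dense layers: separate learned projections for q, k and v
Wq, Wk, Wv = (rng.normal(size=(d_feature, d_feature)) for _ in range(3))
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)                                   # (5, 8)
</code></pre>
<p>With <code>n_heads > 1</code>, the projected features are additionally split into <code>n_heads</code> chunks that each run this computation in parallel.</p>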
<p>The hidden states get projected into <em>h</em> attention heads which do the scaled dot-product attention in parallel. The projection can be interpreted as the extraction of information that is relevant for the head. Each head then does the probabilistic retrieval based on different (learned) criteria.</p> | 2020-09-30 08:19:05.277000+00:00 | 2020-09-30 08:19:05.277000+00:00 | null | null | 64,129,393 | <p>The AttentionQKV layer implemented by Trax is as follows: <a href="https://github.com/google/trax/blob/master/trax/layers/attention.py#L61" rel="nofollow noreferrer">AttentionQKV</a></p>
<pre><code>def AttentionQKV(d_feature, n_heads=1, dropout=0.0, mode='train'):
"""Returns a layer that maps (q, k, v, mask) to (activations, mask).
See `Attention` above for further context/details.
Args:
d_feature: Depth/dimensionality of feature embedding.
n_heads: Number of attention heads.
dropout: Probababilistic rate for internal dropout applied to attention
activations (based on query-key pairs) before dotting them with values.
mode: One of `'train'`, `'eval'`, or `'predict'`.
"""
return cb.Serial(
cb.Parallel(
core.Dense(d_feature),
core.Dense(d_feature),
core.Dense(d_feature),
),
PureAttention( # pylint: disable=no-value-for-parameter
n_heads=n_heads, dropout=dropout, mode=mode),
core.Dense(d_feature),
)
</code></pre>
<p>In particular, what is the purpose of the three parallel dense layers? The input to this layer is q, k, v, mask. Why are q, k, v put through a dense layer?</p> | 2020-09-30 00:36:05.433000+00:00 | 2020-09-30 08:19:05.277000+00:00 | null | attention-model|trax | ['https://arxiv.org/pdf/1706.03762.pdf', 'https://i.stack.imgur.com/BUM0d.png'] | 2
71,209,966 | <blockquote>
<p>The weights learned by network take very small values. What is the reasonable explanation for this? How to interpret this? all the weights are taking up zero value?</p>
</blockquote>
<p>Not all weights are zero, but many are. One reason is regularization (in combination with a large network, i.e. wide layers). Regularization makes weights small (both L1 and L2). If your network is large, most weights are not needed, i.e., they can be set to zero and the model still performs well.</p>
<blockquote>
<p>How to interpret the weight histograms and distributions in Tensorflow? Any good resource for it?</p>
</blockquote>
<p>I am not so sure about weight distributions. There is some work that analyzes them, but I am not aware of a general interpretation; e.g., for CNNs it is known that center weights of a filter/feature usually have larger magnitude than those in corners, see [Locality-Promoting Representation Learning, 2021, ICPR, https://arxiv.org/abs/1905.10661].
For CNNs you can also visualize weights directly, if you have large filters. For example, for (simple) networks you can see that weights first converge towards some kind of class average before overfitting starts. This is shown in Figure 2 of [The learning phases in NN: From Fitting the Majority to Fitting a Few, 2022, http://arxiv.org/abs/2202.08299].
Rather than going for weights, you can also look at what samples trigger the strongest activations for specific features. If you don't want to look at single features, it is also possible to visualize what the network actually remembers about the input, e.g., see [Explaining Neural Networks by Decoding Layer Activations, https://arxiv.org/abs/2005.13630].
These are just a few examples (disclaimer: I authored these works) - there are thousands of other works on explainability out there.</p> | 2022-02-21 16:43:15.587000+00:00 | 2022-02-21 16:43:15.587000+00:00 | null | null | 47,745,313 | <p>I have designed a 3 layer neural network whose inputs are the concatenated features from a CNN and RNN. The weights learned by network take very small values. What is the reasonable explanation for this? and how to interpret the weight histograms and distributions in Tensorflow? Any good resource for it?</p>
<p>This is the weight distribution of the first hidden layer of a 3 layer neural network visualized using tensorboard. How to interpret this? all the weights are taking up zero value?</p>
<p><img src="https://i.stack.imgur.com/5Xaqa.png" alt=""></p>
<p>This is the weight distribution of the second hidden layer of a 3 layer neural:</p>
<p><img src="https://i.stack.imgur.com/Ux9Tg.png" alt=""></p> | 2017-12-11 01:27:21.053000+00:00 | 2022-02-21 16:43:15.587000+00:00 | 2017-12-11 11:37:25.023000+00:00 | machine-learning|tensorflow|neural-network|deep-learning | [] | 0 |
47,752,092 | <blockquote>
<p>how to interpret the weight histograms and distributions in Tensorflow?</p>
</blockquote>
<p>Well, you probably didn't realize it, but you have just asked the 1 million dollar question in ML & AI...</p>
<p><em>Model interpretability</em> is a hyper-active and hyper-hot area of current research (think of holy grail, or something), which has been brought forward lately not least due to the (often tremendous) success of deep learning models in various tasks; these models are currently only black boxes, and we naturally feel uncomfortable about it...</p>
<blockquote>
<p>Any good resource for it?</p>
</blockquote>
<p>Probably not exactly the kind of resources you were thinking of, and we are well off a SO-appropriate topic here, but since you asked...:</p>
<ul>
<li><p>A recent (July 2017) article in Science provides a nice overview of the current status & research: <a href="http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning" rel="nofollow noreferrer">How AI detectives are cracking open the black box of deep learning</a> (no in-text links, but googling names & terms will pay off)</p></li>
<li><p>DARPA itself is currently running a program on <a href="https://www.darpa.mil/program/explainable-artificial-intelligence" rel="nofollow noreferrer">Explainable Artificial Intelligence (XAI)</a></p></li>
<li><p>There was a workshop in NIPS 2016 on <a href="http://nuit-blanche.blogspot.gr/2016/12/nips2016-interpretable-machine-learning.html" rel="nofollow noreferrer">Interpretable Machine Learning for Complex Systems</a> </p></li>
</ul>
<p>On a more practical level:</p>
<ul>
<li><p>The Layer-wise Relevance Propagation (LRP) toolbox for neural networks (<a href="http://www.jmlr.org/papers/v17/15-618.html" rel="nofollow noreferrer">paper</a>, <a href="http://www.explain-ai.org/" rel="nofollow noreferrer">project page</a>, <a href="https://github.com/sebastian-lapuschkin/lrp_toolbox" rel="nofollow noreferrer">code</a>, <a href="https://github.com/VigneshSrinivasan10/interprettensor" rel="nofollow noreferrer">TF Slim wrapper</a>)</p></li>
<li><p>FairML: Auditing Black-Box Predictive Models, by Fast Forward Labs (<a href="http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html" rel="nofollow noreferrer">blog post</a>, <a href="https://arxiv.org/abs/1611.04967" rel="nofollow noreferrer">paper</a>, <a href="https://github.com/adebayoj/fairml" rel="nofollow noreferrer">code</a>)</p></li>
<li><p>A very recent (November 2017) paper by Geoff Hinton, <a href="https://arxiv.org/abs/1711.09784" rel="nofollow noreferrer">Distilling a Neural Network Into a Soft Decision Tree</a>, with an independent <a href="https://github.com/kimhc6028/soft-decision-tree" rel="nofollow noreferrer">PyTorch implementation</a></p></li>
<li><p>SHAP: A Unified Approach to Interpreting Model Predictions (<a href="https://arxiv.org/abs/1705.07874" rel="nofollow noreferrer">paper</a>, authors' <a href="https://github.com/slundberg/shap" rel="nofollow noreferrer">code</a>)</p></li>
</ul>
<p>These should be enough for starters, and to give you a general idea of the subject about which you asked...</p>
<p><strong>UPDATE</strong> (Oct 2018): I have put up a much more detailed list of practical resources in my answer to the question <a href="https://stackoverflow.com/questions/52391871/predictive-analytics-why-factor/52392344#52392344">Predictive Analytics - “Why” factor?</a></p> | 2017-12-11 11:36:06.883000+00:00 | 2018-10-15 15:21:20.103000+00:00 | 2018-10-15 15:21:20.103000+00:00 | null | 47,745,313 | <p>I have designed a 3 layer neural network whose inputs are the concatenated features from a CNN and RNN. The weights learned by network take very small values. What is the reasonable explanation for this? and how to interpret the weight histograms and distributions in Tensorflow? Any good resource for it?</p>
<p>This is the weight distribution of the first hidden layer of a 3 layer neural network visualized using tensorboard. How to interpret this? all the weights are taking up zero value?</p>
<p><img src="https://i.stack.imgur.com/5Xaqa.png" alt=""></p>
<p>This is the weight distribution of the second hidden layer of a 3 layer neural:</p>
<p><img src="https://i.stack.imgur.com/Ux9Tg.png" alt=""></p> | 2017-12-11 01:27:21.053000+00:00 | 2022-02-21 16:43:15.587000+00:00 | 2017-12-11 11:37:25.023000+00:00 | machine-learning|tensorflow|neural-network|deep-learning | ['http://www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning', 'https://www.darpa.mil/program/explainable-artificial-intelligence', 'http://nuit-blanche.blogspot.gr/2016/12/nips2016-interpretable-machine-learning.html', 'http://www.jmlr.org/papers/v17/15-618.html', 'http://www.explain-ai.org/', 'https://github.com/sebastian-lapuschkin/lrp_toolbox', 'https://github.com/VigneshSrinivasan10/interprettensor', 'http://blog.fastforwardlabs.com/2017/03/09/fairml-auditing-black-box-predictive-models.html', 'https://arxiv.org/abs/1611.04967', 'https://github.com/adebayoj/fairml', 'https://arxiv.org/abs/1711.09784', 'https://github.com/kimhc6028/soft-decision-tree', 'https://arxiv.org/abs/1705.07874', 'https://github.com/slundberg/shap', 'https://stackoverflow.com/questions/52391871/predictive-analytics-why-factor/52392344#52392344'] | 15 |
39,765,786 | <p>You're using the Adam optimizer (<a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">https://arxiv.org/abs/1412.6980</a>) for optimization. Adam has two state variables to store statistics about the gradients which are the same size as the parameters (Algorithm 1), which is your two additional variables per parameter variable. The optimizer itself has a few hyperparameters, among them β<sub>1</sub> and β<sub>2</sub>, which I guess are in your case stored as variables.</p> | 2016-09-29 08:56:32.273000+00:00 | 2018-07-02 18:45:58.640000+00:00 | 2018-07-02 18:45:58.640000+00:00 | null | 39,757,985 | <p>I created a <em>convolutional neural network</em> with three convolutional layers and two fully connected layers. I used <code>tf.train.saver()</code> to save the variables.
When I use <code>inspect_checkpoint.py</code> to check the variables saved in the checkpoint file. Why are there two additional variables saved for each layer, like <code>Adam_1</code> and <code>Adam</code>? Also, what are <code>beta1_power</code> and <code>beta2_power</code>? </p>
<pre><code>conv_layer1_b (DT_FLOAT) [32]
conv_layer1_w (DT_FLOAT) [1,16,1,32]
conv_layer1_b/Adam (DT_FLOAT) [32]
conv_layer1_w/Adam (DT_FLOAT) [1,16,1,32]
conv_layer1_w/Adam_1 (DT_FLOAT) [1,16,1,32]
conv_layer1_b/Adam_1 (DT_FLOAT) [32]
conv_layer3_w/Adam (DT_FLOAT) [1,16,64,64]
conv_layer3_w (DT_FLOAT) [1,16,64,64]
conv_layer3_b/Adam_1 (DT_FLOAT) [64]
conv_layer3_b (DT_FLOAT) [64]
conv_layer3_b/Adam (DT_FLOAT) [64]
conv_layer3_w/Adam_1 (DT_FLOAT) [1,16,64,64]
conv_layer2_w/Adam_1 (DT_FLOAT) [1,16,32,64]
conv_layer2_w/Adam (DT_FLOAT) [1,16,32,64]
conv_layer2_w (DT_FLOAT) [1,16,32,64]
conv_layer2_b/Adam_1 (DT_FLOAT) [64]
conv_layer2_b (DT_FLOAT) [64]
conv_layer2_b/Adam (DT_FLOAT) [64]
beta1_power (DT_FLOAT) []
beta2_power (DT_FLOAT) []
NN1_w (DT_FLOAT) [2432,512]
NN1_b (DT_FLOAT) [512]
NN1_w/Adam_1 (DT_FLOAT) [2432,512]
NN1_b/Adam_1 (DT_FLOAT) [512]
NN1_w/Adam (DT_FLOAT) [2432,512]
NN1_b/Adam (DT_FLOAT) [512]
NN2_w (DT_FLOAT) [512,2]
NN2_b (DT_FLOAT) [2]
NN2_w/Adam_1 (DT_FLOAT) [512,2]
NN2_b/Adam_1 (DT_FLOAT) [2]
NN2_w/Adam (DT_FLOAT) [512,2]
NN2_b/Adam (DT_FLOAT) [2]
</code></pre> | 2016-09-28 21:29:35.193000+00:00 | 2018-07-02 18:55:49.410000+00:00 | 2018-07-02 18:55:49.410000+00:00 | tensorflow|optimization|deep-learning | ['https://arxiv.org/abs/1412.6980'] | 1 |
49,977,348 | <p>You cannot really plot the Adam learning rate like this, since Adam is a momentum optimizer. The applied gradient at each step depends on a moving average of the mean and standard deviation of the gradients of previous steps.</p>
<p>In general there is no guarantee that the learning will converge; the raw learning rate <code>alpha</code> itself is not directly changed by Adam. It is only rescaled using the moment estimates of the gradient. The learning only converges well if the mean and standard deviation of the gradient decrease over time when approaching the global minimum, which is often the case for simple neural networks. </p>
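<p>For reference, one Adam update can be sketched in a few lines of numpy (my own paraphrase of the formulation in the paper; the variable names are mine). It shows how the factor you plotted combines with the per-parameter moment estimates:</p>
<pre><code>import numpy as np

def adam_step(theta, g, m, v, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # t is the step count starting at 1; g is the current gradient
    m = beta1 * m + (1 - beta1) * g            # moving average of the gradient
    v = beta2 * v + (1 - beta2) * g**2         # moving average of the squared gradient
    lr_t = alpha * np.sqrt(1 - beta2**t) / (1 - beta1**t)   # the factor you plotted
    theta = theta - lr_t * m / (np.sqrt(v) + eps)           # per-parameter rescaling
    return theta, m, v
</code></pre>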
<p>For highly stochastic problems, however, one might still need to implement some form of learning rate decay to suppress 'oscillations' around the optimal parameters, or at least make them smaller, to make sure there really is convergence.</p>
<p>If you really want to understand how exactly this works you might want to read the Adam <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">paper</a>; it is much simpler than it seems at first sight.</p> | 2018-04-23 09:15:13.787000+00:00 | 2018-04-23 09:21:15.727000+00:00 | 2018-04-23 09:21:15.727000+00:00 | null | 49,969,957 | <p>Okay, I have been reading some of the posts regarding AdamOptimizer in tensorflow. I think there is some confusion around, at least among beginners in NNs like me.</p>
<p>If I understood correctly, tf.train.AdamOptimizer keeps a so-called "adaptative learning rate". I thought that this learning rate would grow smaller as time increases.</p>
<p>However, when I plot the function by which the learning rate is scaled, taken from the <a href="https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/train/AdamOptimizer" rel="nofollow noreferrer">docs</a>,</p>
<pre><code>t <- t + 1
lr_t <- learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t)
</code></pre>
<p>this is what I get:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

t = np.arange(200)
result = np.sqrt(1-0.999**t)/(1-0.9**t)
plt.plot(result)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/G2BEK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G2BEK.png" alt="enter image description here"></a></p>
<p>So, for t = 1, the user-selected learning rate is multiplied by about 0.3. Then it decreases quite fast to about 0.15 of its value, and then increases with time, slowly approaching the limit = the user-selected learning rate. </p>
<p>Isn't it a bit weird? I guess I am wrong somewhere, but I would've expected the learning rate to start at a higher value and then progressively decrease towards smaller values. </p> | 2018-04-22 19:35:02.703000+00:00 | 2018-04-23 18:45:38.613000+00:00 | null | python|tensorflow|optimization|neural-network | ['https://arxiv.org/abs/1412.6980'] | 1
65,341,510 | <p>Basically, the COCO dataset was described in a paper before its release (you can find it <a href="https://arxiv.org/abs/1405.0312" rel="nofollow noreferrer">here</a>). At this point, the authors gave a list of the 91 types of objects that would be in the dataset.</p>
<p>But when the 2014 and 2017 datasets were released, it turned out that you could find only 80 of these objects in the annotations.</p>
<p>The list you have is the original list of objects (as described in the paper) but with every object that does not appear in the 2014 and 2017 releases replaced by the empty string <code>""</code>.</p>
<p>My guess is that the sole purpose of keeping these "phantom" objects is to keep consistency with object ids that may have been fixed at some point in the past.</p>
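<p>A tiny hypothetical illustration of that idea (made-up category names, not the real COCO ordering): padding the list with <code>""</code> keeps each remaining category's id equal to its position in the list.</p>
<pre><code>obj_list = ["cat_a", "cat_b", "", "cat_c"]   # "" marks a category with no annotations

id_to_name = {i: name for i, name in enumerate(obj_list) if name}
print(id_to_name)   # {0: 'cat_a', 1: 'cat_b', 3: 'cat_c'} -- note the gap at id 2
</code></pre>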
<p>If you want to learn more about it, you can look at <a href="https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/" rel="nofollow noreferrer">this blog entry</a>.</p> | 2020-12-17 13:18:27.717000+00:00 | 2020-12-17 13:18:27.717000+00:00 | null | null | 65,340,780 | <p>I have been checking out this <a href="https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch" rel="nofollow noreferrer">detr repository</a> and the total number of classes are 100, but 10 of these are empty string as shown <a href="https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch/blob/master/projects/coco.yml" rel="nofollow noreferrer">here</a>.<br />
Is there any particular reason behind this?</p> | 2020-12-17 12:30:47.870000+00:00 | 2020-12-17 13:18:27.717000+00:00 | null | deep-learning|computer-vision|pytorch|object-detection | ['https://arxiv.org/abs/1405.0312', 'https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/'] | 2 |
65,332,587 | <p>Perhaps to get an overall high level picture you can read the <a href="https://arxiv.org/pdf/1801.10228.pdf" rel="nofollow noreferrer">paper</a>.</p>
<blockquote>
<p>Nothing is stated regarding cryptographic algorithms in the official GitHub documentation,</p>
</blockquote>
<ul>
<li>Fabric uses TLS 1.2/1.3 to secure and authenticate nodes at the network level. Both ECDSA and RSA TLS certificates are supported.</li>
<li>Fabric uses ECDSA for all signatures of clients and nodes, with the NIST curve P-256.</li>
<li>Fabric authenticates clients and nodes with x509 based PKI, unless you configure it to use the exotic bleeding edge <a href="https://hyperledger-fabric.readthedocs.io/en/latest/idemix.html" rel="nofollow noreferrer">identity mixer</a>.</li>
<li>Fabric uses only SHA256 as a collision resistant hash function.</li>
<li>Fabric supports <a href="https://hyperledger-fabric.readthedocs.io/en/latest/hsm.html" rel="nofollow noreferrer">HSM</a> based signing.</li>
<li>Blocks are made up of headers, metadata, and transactions, where most fields are <a href="https://developers.google.com/protocol-buffers" rel="nofollow noreferrer">protobuf</a> encoded but a small part is ASN1 encoded.</li>
</ul>
<blockquote>
<p>as far as I know, and the only tiny bit of crypto that I found was in the actual code on GitHub, the internal part.</p>
</blockquote>
<p>Take a look at <a href="https://github.com/hyperledger/fabric/tree/master/bccsp" rel="nofollow noreferrer">BCCSP</a> (BlockChain Crypto Service Provider) ;-)</p>
<blockquote>
<p>and what kinds of algorithms are used, is harder.</p>
</blockquote>
<p>The official Fabric currently only supports a <strong>crash fault tolerant consensus algorithm</strong> for its blockchain, so it assumes ordering nodes are not malicious and, specifically, that they do not fork the blockchain.</p>
<p>There are some unofficial efforts to build a <strong>Byzantine Fault Tolerant</strong> fork of Fabric, such as <a href="https://github.com/SmartBFT-Go/" rel="nofollow noreferrer">this one</a>.</p> | 2020-12-16 23:25:46.700000+00:00 | 2020-12-16 23:31:27.707000+00:00 | 2020-12-16 23:31:27.707000+00:00 | null | 65,326,298 | <p>For a project of mine I wanted to know exactly how Hyperledger Fabric works.</p>
<p>There is no shortage of documentation on how you can use this technology (I am really glad for the developers out there), but I'm not a developer; I have to understand exactly what kinds of algorithms are used in order to justify whether we can consider this technology safe or not. <em>I obviously already know that it's probably safe, but that is not really enough for me.</em></p>
<p>And while documentation about usage is easy to find, finding documentation on how it works in the background, and what kinds of algorithms are used, is harder.</p>
<p>Nothing is stated regarding cryptographic algorithms in the official GitHub documentation, as far as I know, and the only tiny bit of crypto that I found was in the actual code on GitHub, the internal part. I can more or less search for what I want there, but I'm really in need of technical documentation that I can quote, and I just can't find it.</p>
<p>If you have some links to technical docs, please let me know. In short, what I am looking for:</p>
<ul>
<li><p>Details on blockchain storage</p>
</li>
<li><p>Cryptography behind Hyperledger Fabric: what kind of hash function does it use?</p>
</li>
<li><p>Exactly what form does data in the blockchain take, and in what format?</p>
</li>
</ul>
<p>I'm here if I wasn't clear about what I need. I'm not a native English speaker, so I hope any mistakes I made are bearable.</p>
<p>Edit: now that I more or less have my solution, I'll share one more helpful <a href="https://kctheservant.medium.com/tls-in-hyperledger-fabric-b38fccb8614c" rel="nofollow noreferrer">link</a> related to the documentation that I found, which talks about how TLS is used with Fabric.</p> | 2020-12-16 15:30:45.863000+00:00 | 2020-12-17 06:34:53.413000+00:00 | 2020-12-17 06:34:53.413000+00:00 | cryptography|hyperledger-fabric|hyperledger|code-documentation | ['https://arxiv.org/pdf/1801.10228.pdf', 'https://hyperledger-fabric.readthedocs.io/en/latest/idemix.html', 'https://hyperledger-fabric.readthedocs.io/en/latest/hsm.html', 'https://developers.google.com/protocol-buffers', 'https://github.com/hyperledger/fabric/tree/master/bccsp', 'https://github.com/SmartBFT-Go/'] | 6
62,096,849 | <p>It isn't exactly clear what you are asking. It seems that you are unsure of what you should be modelling, which is a statistical question and may be more appropriate on Cross Validated, but it also seems that you are unsure about R (or rather the package <code>lme4</code>) syntax.</p>
<p>Check out this comprehensive guide to using <code>lme4</code>: <a href="https://arxiv.org/pdf/1406.5823.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1406.5823.pdf</a>. Page 6 in particular will be helpful for understanding random-effects syntax. From that document: </p>
<p><em>"Each random-effects term is of the form (expr|factor). The expression expr is evaluated as
a linear model formula, producing a model matrix following the same rules used in standard
R modelling functions (e.g., lm or glm). The expression factor is evaluated as an R factor"</em></p>
<p>When you have RE terms specified like this: <code>(1|participant) + (1|pre_post)</code> that means that the RE are crossed (see PeerJ paper below), whereas <code>pre_post + (1+ pre_post|participant)</code> is a correlated intercept and slope (see page 6 in link above).</p>
<p>I agree with @sjp that you don't want to use a random effect for something with only two levels. This paper, which is a wonderful introduction to mixed models, suggests that you should have at least 5 levels: <a href="https://peerj.com/articles/4794/" rel="nofollow noreferrer">https://peerj.com/articles/4794/</a> </p>
<p>An alternative to @sjp's suggestion of log transforming the data that might work for trial time data is to use the <code>glmer()</code> function with a <code>family = Gamma</code> argument, but either way it is good to inspect residual plots for model fits <a href="http://www.sthda.com/english/articles/39-regression-model-diagnostics/161-linear-regression-assumptions-and-diagnostics-in-r-essentials/" rel="nofollow noreferrer">http://www.sthda.com/english/articles/39-regression-model-diagnostics/161-linear-regression-assumptions-and-diagnostics-in-r-essentials/</a>. </p>
<p>In the case that you don't actually need a random effects term, then the base R functions <code>lm</code> and <code>glm</code> will replace <code>lmer</code> and <code>glmer</code> respectively.</p> | 2020-05-30 01:45:30+00:00 | 2020-05-30 01:45:30+00:00 | null | null | 62,004,944 | <p>I am new to mixed model analysis. Can somebody help me to get things clear? </p>
<p>I have the following repeated measurement design:
<em>pre test - intervention - post test.</em> </p>
<p>Variables:
<strong>Go_rt</strong> - reaction time.
<strong>pre_post</strong> - categorical variable (pre-test; post-test)
<strong>expectation</strong> - participants' expectations. </p>
<p>I have the following R code where I want to apply a mixed model to evaluate whether reaction time is statistically different (pre-test vs post-test). I also want to know whether there is an interaction effect with participants' expectations. </p>
<p><em>mod <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1|participant), data=data,
REML=FALSE)</em></p>
<p>What I am unsure about is whether the pre_post variable has to be specified in the random part. In that case the code would look like this: </p>
<p><em>mod1 <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1+ pre_post|participant), data=data,
REML=FALSE)</em></p>
<p>And what would change if I specified it like this? </p>
<p><em>mod2 <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1|participant) + (1|pre_post), data=data,
REML=FALSE)</em></p>
<p>Actually mod2 gives me significant results for the interaction effect whereas mod & mod1 do not. </p> | 2020-05-25 14:51:35.010000+00:00 | 2020-06-09 15:59:49.407000+00:00 | 2020-05-25 14:53:56.870000+00:00 | r|lme4|mixed-models | ['https://arxiv.org/pdf/1406.5823.pdf', 'https://peerj.com/articles/4794/', 'http://www.sthda.com/english/articles/39-regression-model-diagnostics/161-linear-regression-assumptions-and-diagnostics-in-r-essentials/'] | 3 |
62,015,816 | <p>If I am understanding your question correctly, you do not want to have a random intercept for your treatment (pre/post). This is not some noise that you wish to account for, but your experimental question, so <code>mod2</code> is out. Also, you really shouldn't have a random effect that only has two levels (<a href="https://dynamicecology.wordpress.com/2015/11/04/is-it-a-fixed-or-random-effect/" rel="nofollow noreferrer">https://dynamicecology.wordpress.com/2015/11/04/is-it-a-fixed-or-random-effect/</a>).</p>
<p>Pre-test should be the reference level of the factor, so that you can easily interpret what effect your treatment had on reaction time. You can change that by modifying the following code to your data:</p>
<pre><code>data$pre_post = relevel(data$pre_post, ref="pre")
</code></pre>
<p>For reaction time, it's also convention in many disciplines to model log reaction time, this can easily be done by putting it in the model formula, which I have done below. If this is not the case in your field, feel free to disregard this.</p>
<p>It's also possible that <code>expectations</code> affects your participants differently, so you could also add a random slope for <code>expectations</code> by participant. First, I would test whether the first random slope, <code>pre_post</code>, leads to a significantly better model fit. I would do that with the following code. Note that REML has been changed to TRUE because you are comparing random effects now.</p>
<pre><code>mod1 <- lmer(log(Go_rt) ~ pre_post + expectations + pre_post:expectations + (1|participant), data=data, REML=TRUE)
mod1.1 <- lmer(log(Go_rt) ~ pre_post + expectations + (1 + pre_post|participant), data=data, REML=TRUE)
anova(mod1, mod1.1)
</code></pre>
<p>If it does lead to a better model, I would leave it in. Then I would test whether or not a random slope for <code>expectations</code> improves the model.</p>
<pre><code>mod1.2 <- lmer(log(Go_rt) ~ pre_post + expectations + (1 + pre_post + expectations|participant), data=data, REML=TRUE)
anova(mod1.1, mod1.2)
</code></pre>
<p>After I had found the best random effects structure, I would look at the fixed-effects, beginning with the interaction, and see if it was significant in the likelihood ratio test, again using the <code>anova()</code> function.</p>
<p>I hope this helps. There are other ways of looking at random effects, and of seeing whether or not they are warranted, using the <code>rePCA()</code> function included in <code>lme4</code>. It is probably a good idea to look into this paper if you are fitting mixed models: <a href="https://arxiv.org/pdf/1506.04967.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.04967.pdf</a> </p> | 2020-05-26 06:18:49.900000+00:00 | 2020-05-26 06:32:33.353000+00:00 | 2020-05-26 06:32:33.353000+00:00 | null | 62,004,944 | <p>I am new to mixed model analysis. Can somebody help me to get things clear? </p>
<p>I have the following repeated measurement design:
<em>pre test - intervention - post test.</em> </p>
<p>Variables:
<strong>Go_rt</strong> - reaction time.
<strong>pre_post</strong> - categorical variable (pre-test; post-test)
<strong>expectation</strong> - participants' expectations. </p>
<p>I have the following R code where I want to apply a mixed model to evaluate whether reaction time is statistically different (pre-test vs post-test). I also want to know whether there is an interaction effect with participants' expectations. </p>
<p><em>mod <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1|participant), data=data,
REML=FALSE)</em></p>
<p>What I am unsure about is whether the pre_post variable has to be specified in the random part. In that case the code would look like this: </p>
<p><em>mod1 <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1+ pre_post|participant), data=data,
REML=FALSE)</em></p>
<p>And what would change if I specified it like this? </p>
<p><em>mod2 <- lmer(Go_rt ~ pre_post +expectations + pre_post:expectations + (1|participant) + (1|pre_post), data=data,
REML=FALSE)</em></p>
<p>Actually mod2 gives me significant results for the interaction effect whereas mod & mod1 do not. </p> | 2020-05-25 14:51:35.010000+00:00 | 2020-06-09 15:59:49.407000+00:00 | 2020-05-25 14:53:56.870000+00:00 | r|lme4|mixed-models | ['https://dynamicecology.wordpress.com/2015/11/04/is-it-a-fixed-or-random-effect/', 'https://arxiv.org/pdf/1506.04967.pdf'] | 2 |
2,991,587 | <p>As the other answers claim, lookarounds don't add any extra power to regular expressions.</p>
<p>I think we can show this using the following:</p>
<p><a href="http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.5127v1.pdf" rel="noreferrer">One Pebble 2-NFA</a> (see the Introduction section of the paper which refers to it).</p>
<p>The 1-pebble 2NFA does not deal with nested lookaheads, but we can use a variant of multi-pebble 2NFAs (see the section below).</p>
<p><strong>Introduction</strong></p>
<p>A 2-NFA is a non-deterministic finite automaton which has the ability to move either left or right on its input.</p>
<p>A one pebble machine is where the machine can place a pebble on the input tape (i.e. mark a specific input symbol with a pebble) and do possibly different transitions based on whether there is a pebble at the current input position or not.</p>
<p>It is known the One Pebble 2-NFA has the same power as a regular DFA.</p>
<p><strong>Non-nested Lookaheads</strong></p>
<p>The basic idea is as follows:</p>
<p>The 2NFA allows us to backtrack (or 'front track') by moving forward or backward in the input tape. So for a lookahead we can do the match for the lookahead regular expression and then backtrack what we have consumed, in matching the lookahead expression. In order to know exactly when to stop backtracking, we use the pebble! We drop the pebble before we enter the dfa for the lookahead to mark the spot where the backtracking needs to stop.</p>
<p>Thus at the end of running our string through the pebble 2NFA, we know whether we matched the lookahead expression or not and the input left (i.e. what is left to be consumed) is exactly what is required to match the remaining.</p>
<p>So for a lookahead of the form u(?=v)w</p>
<p>We have the DFAs for u, v and w.</p>
<p>From the accepting state (yes, we can assume there is only one) of DFA for u, we make an e-transition to the start state of v, marking the input with a pebble.</p>
<p>From an accepting state for v, we e-transition to a state which keeps moving the input left, till it finds a pebble, and then transitions to the start state of w.</p>
<p>From a rejecting state of v, we e-transition to a state which keeps moving left until it finds the pebble, and transitions to the accepting state of u (i.e. where we left off).</p>
<p>The proof used for regular NFAs to show r1 | r2, or r* etc, carry over for these one pebble 2nfas. See <a href="http://www.coli.uni-saarland.de/projects/milca/courses/coal/html/node41.html#regularlanguages.sec.regexptofsa" rel="noreferrer">http://www.coli.uni-saarland.de/projects/milca/courses/coal/html/node41.html#regularlanguages.sec.regexptofsa</a> for more info on how the component machines are put together to give the bigger machine for the r* expression etc.</p>
<p>The reason why the above proofs for r* etc work is that the backtracking ensures that the input pointer is always at the right spot, when we enter the component nfas for repetition. Also, if a pebble is in use, then it is being processed by one of the lookahead component machines. Since there are no transitions from lookahead machine to lookahead machine without completely backtracking and getting back the pebble, a one pebble machine is all that is needed.</p>
<p>For example, consider ([^a] | a(?=...b))*</p>
<p>and the string abbb.</p>
<p>We have abbb which goes through the peb2nfa for a(?=...b), at the end of which we are at the state: (bbb, matched) (i.e in input bbb is remaining, and it has matched 'a' followed by '..b'). Now because of the *, we go back to the beginning (see the construction in the link above), and enter the dfa for [^a]. Match b, go back to beginning, enter [^a] again two times, and then accept.</p>
<p><strong>Dealing with Nested Lookaheads</strong></p>
<p>To handle nested lookaheads we can use a restricted version of k-pebble 2NFA as defined here: <a href="http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.3984" rel="noreferrer">Complexity Results for Two-Way and Multi-Pebble Automata and their Logics</a> (see Definition 4.1 and Theorem 4.2).</p>
<p>In general, 2 pebble automata can accept non-regular sets, but with the following restrictions, k-pebble automata can be shown to be regular (Theorem 4.2 in above paper).</p>
<p>If the pebbles are P_1, P_2, ..., P_K</p>
<ul>
<li><p>P_{i+1} may not be placed unless P_i is already on the tape and P_{i} may not be picked up unless P_{i+1} is not on the tape. Basically the pebbles need to be used in a LIFO fashion.</p></li>
<li><p>Between the time P_{i+1} is placed and the time that either P_{i} is picked up or P_{i+2} is placed, the automaton can traverse only the subword located between the current location of P_{i} and the end of the input word that lies in the direction of P_{i+1}. Moreover, in this sub-word, the automaton can act only as a 1-pebble automaton with Pebble P_{i+1}. In particular it is not allowed to lift up, place or even sense the presence of another pebble.</p></li>
</ul>
<p>So if v is a nested lookahead expression of depth k, then (?=v) is a nested lookahead expression of depth k+1. When we enter a lookahead machine within, we know exactly how many pebbles have to have been placed so far and so can exactly determine which pebble to place and when we exit that machine, we know which pebble to lift. All machines at depth t are entered by placing pebble t and exited (i.e. we return to processing of a depth t-1 machine) by removing pebble t. Any run of the complete machine looks like a recursive dfs call of a tree and the above two restrictions of the multi-pebble machine can be catered to.</p>
<p>Now when you combine expressions, for rr1, since you concat, the pebble numbers of r1 must be incremented by the depth of r. For r* and r|r1 the pebble numbering remains the same.</p>
<p>Thus any expression with lookaheads can be converted to an equivalent multi-pebble machine with the above restrictions in pebble placement and so is regular.</p>
<p><strong>Conclusion</strong></p>
<p>This basically addresses the drawback in Francis's original proof: being able to prevent the lookahead expressions from consuming anything which are required for future matches.</p>
<p>Since Lookbehinds are just finite string (not really regexs) we can deal with them first, and then deal with the lookaheads.</p>
<p>Sorry for the incomplete writeup, but a complete proof would involve drawing a lot of figures.</p>
<p>It looks right to me, but I will be glad to know of any mistakes (which I seem to be fond of :-)).</p> | 2010-06-07 17:11:56.897000+00:00 | 2010-06-09 12:51:41.690000+00:00 | 2010-06-09 12:51:41.690000+00:00 | null | 2,974,210 | <p>There are some features in modern regex engines which allow you to match languages that couldn't be matched without that feature. For example the following regex using back references matches the language of all strings that consist of a word that repeats itself: <code>(.+)\1</code>. This language is not regular and can't be matched by a regex that does not use back references.</p>
<p>Does lookaround also affect which languages can be matched by a regular expression? I.e. are there any languages that can be matched using lookaround that couldn't be matched otherwise? If so, is this true for all flavors of lookaround (negative or positive lookahead or lookbehind) or just for some of them?</p> | 2010-06-04 12:44:20.373000+00:00 | 2012-03-01 07:53:09.703000+00:00 | 2012-03-01 07:53:09.703000+00:00 | regex|lookbehind|lookahead|lookaround | ['http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.5127v1.pdf', 'http://www.coli.uni-saarland.de/projects/milca/courses/coal/html/node41.html#regularlanguages.sec.regexptofsa', 'http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.56.3984'] | 3 |
28,276,278 | <p>In which way are the results different?</p>
<p>What do you mean by "confidential parameters"?</p>
<p>3 interesting papers that also deal with the interpretation of 0-1 data in the context of recommender systems (I am a co-author on one of those):</p>
<ol>
<li>Hu et al.: <a href="http://www.hpl.hp.com/techreports/2008/HPL-2008-48R1.pdf" rel="nofollow">Collaborative filtering for implicit feedback datasets</a></li>
<li>Rendle et al.: <a href="http://arxiv.org/pdf/1205.2618" rel="nofollow">BPR: Bayesian personalized ranking from implicit feedback</a></li>
<li>Pan et al.: <a href="http://www.hpl.hp.com/techreports/2008/HPL-2008-48R1.pdf" rel="nofollow">One-class collaborative filtering</a></li>
</ol> | 2015-02-02 11:01:39.143000+00:00 | 2015-02-02 11:01:39.143000+00:00 | null | null | 28,262,561 | <p>I use 0-1 data to train matrix factorization (MF) model and use recall to eval the performance. For zero data, we can interpret as two ways. First, user does not like it, roughly. Second, user does not know about it or does not like it. In the first condition, I sampling random negative samples and use gradient descend. In the latter case, I use confidential parameters and iteratively update analytic expression. I found they give totally different results and I am puzzled by this. Could someone help me?</p> | 2015-02-01 12:11:31.207000+00:00 | 2015-02-02 11:01:39.143000+00:00 | 2015-02-01 12:17:26.970000+00:00 | recommendation-engine | ['http://www.hpl.hp.com/techreports/2008/HPL-2008-48R1.pdf', 'http://arxiv.org/pdf/1205.2618', 'http://www.hpl.hp.com/techreports/2008/HPL-2008-48R1.pdf'] | 3 |
50,516,531 | <p>The original YOLO network is inspired by the GoogLeNet model.
There is no R-CNN in YOLO. They both use very different techniques to detect objects in images. R-CNN, for instance, predicts bounding boxes for certain regions and checks with a classifier if there is an object. YOLO, on the other hand, will look at the full image and make its predictions from there. </p>
<p>For more information you should read the original paper: <a href="https://arxiv.org/pdf/1506.02640.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.02640.pdf</a></p> | 2018-05-24 19:07:15.817000+00:00 | 2018-05-24 19:07:15.817000+00:00 | null | null | 50,324,900 | <p>I tried to find which convolutional neural network is used in Yolo project but, unfortunately, I did not get this information. Which system for object detection is used in Yolo? R-CNN?</p> | 2018-05-14 07:18:34.773000+00:00 | 2018-05-24 19:07:15.817000+00:00 | null | object-detection|image-recognition|convolutional-neural-network|yolo | ['https://arxiv.org/pdf/1506.02640.pdf'] | 1 |
<p>In short, build a <a href="https://en.wikipedia.org/wiki/Multiclass_classification" rel="nofollow noreferrer">multi-class</a> or <a href="https://en.wikipedia.org/wiki/Multi-label_classification" rel="nofollow noreferrer">multi-label classification</a> model. Then <a href="https://en.wikipedia.org/wiki/Platt_scaling" rel="nofollow noreferrer">calibrate</a> your model outputs. Either a <code>Word2Vec</code> or a <code>Bag-of-words</code> representation can be used as input features for such a model. </p>
<p>Longer version: see the figure below. This is Figure 1 from <a href="https://arxiv.org/abs/1706.04599" rel="nofollow noreferrer">this</a> paper. The output from your model would be logits, and you could apply a softmax (multi-class) or a sigmoid (multi-label) transform on the logits. If you want more confidence in the classifier output, the calibration step described in the paper is probably what you want to perform. This step converts the classifier output into a representation of the likelihood of true correctness using an additional validation dataset.</p>
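<p>As a rough illustration (not from the paper, just a minimal NumPy sketch of the idea), the softmax/sigmoid transforms and a temperature parameter for calibration could look like this; the logit values below are made up:</p>
<pre><code>import numpy as np

def softmax(z, T=1.0):
    # Multi-class: scores over mutually exclusive classes; T is a temperature
    # that can be tuned on a held-out validation set (temperature scaling).
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    # Multi-label: an independent probability per class/tag.
    return 1.0 / (1.0 + np.exp(-np.asarray(z, dtype=float)))

logits = np.array([2.0, 0.5, -1.0, 0.1, 1.2, -0.3])   # hypothetical classifier outputs
probs = softmax(logits, T=1.5)
top5 = np.argsort(probs)[::-1][:5]                     # indices of the 5 highest-scoring classes
</code></pre>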
<p><img src="https://i.stack.imgur.com/6JLGl.png" alt="Figure1 from paper"></p> | 2019-03-22 00:57:05.857000+00:00 | 2019-03-22 01:06:10.223000+00:00 | 2019-03-22 01:06:10.223000+00:00 | null | 55,172,124 | <p>I am creating a Python model that will classify a given document based on the text. Because each document still needs to be manually reviewed by a human, I am creating a suggestion platform that will give the user the top n classes that a given document belongs to. Additionally, each document can belong to more than one class. I have a training set of documents filled with rich text and their tags.</p>
<p>What I would like to do is perform a regression on each document to get a probabilistic score of each classification and return the top 5 highest scored classes.</p>
<p>I have looked into Bayes classification models, and recommendation systems and I think a logistic regression will help be better as it returns a score. I am new to machine learning and would appreciate any advice or examples that is modeled after this kind of problem. Thank you.</p>
<p>EDIT: Specifically, my problem is how should I parse my text data for ML modeling with logistic regression? Do I need to represent my text in a vector format using Word2Vec/Doc2Vec or a Bag-of-words model?</p> | 2019-03-14 21:15:07.853000+00:00 | 2019-03-22 01:06:10.223000+00:00 | 2019-03-18 18:25:07.917000+00:00 | python|nlp|logistic-regression|text-classification | ['https://en.wikipedia.org/wiki/Multiclass_classification', 'https://en.wikipedia.org/wiki/Multi-label_classification', 'https://en.wikipedia.org/wiki/Platt_scaling', 'https://arxiv.org/abs/1706.04599'] | 4 |
36,888,858 | <p>The package <a href="https://github.com/giordano/PolynomialRoots.jl" rel="nofollow"><code>PolynomialRoots.jl</code></a> provides the function <code>roots()</code> to find all (real and complex) roots of polynomials of any order. The only mandatory argument is the array with coefficients of the polynomial in ascending order.</p>
<p>For example, in order to find the roots of</p>
<pre><code>6x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1
</code></pre>
<p>after loading the package (<code>using PolynomialRoots</code>) you can use</p>
<pre class="lang-julia prettyprint-override"><code>julia> roots([1, 2, 3, 4, 5, 6])
5-element Array{Complex{Float64},1}:
0.294195-0.668367im
-0.670332+2.77556e-17im
0.294195+0.668367im
-0.375695-0.570175im
-0.375695+0.570175im
</code></pre>
<p>The package is a Julia implementation of the root-finding algorithm described in this paper: <a href="http://arxiv.org/abs/1203.1034" rel="nofollow">http://arxiv.org/abs/1203.1034</a></p>
<p><code>PolynomialRoots.jl</code> also has support for arbitrary precision calculation. This is useful for solving equations that cannot be solved in double precision. For example</p>
<pre class="lang-julia prettyprint-override"><code>julia> r = roots([94906268.375, -189812534, 94906265.625]);
julia> (r[1], r[2])
(1.0000000144879793 - 0.0im,1.0000000144879788 + 0.0im)
</code></pre>
<p>gives the wrong result for the polynomial; instead, passing the input array in arbitrary precision forces arbitrary-precision calculations that provide the right answer (see <a href="https://en.wikipedia.org/wiki/Loss_of_significance" rel="nofollow">https://en.wikipedia.org/wiki/Loss_of_significance</a>):</p>
<pre class="lang-julia prettyprint-override"><code>julia> r = roots([BigFloat(94906268.375), BigFloat(-189812534), BigFloat(94906265.625)]);
julia> (Float64(r[1]), Float64(r[2]))
(1.0000000289759583,1.0)
</code></pre> | 2016-04-27 11:46:47.200000+00:00 | 2016-05-03 20:12:26.200000+00:00 | 2016-05-03 20:12:26.200000+00:00 | null | 22,588,709 | <p>All, </p>
<p>I've just been starting to play around with the Julia language and am enjoying it quite a bit. At the end of the 3rd tutorial there's an interesting problem: genericize the quadratic formula such that it solves for the roots of any <a href="http://forio.com/products/julia-studio/tutorials/beginner/3/" rel="nofollow">n-order polynomial equation</a>.</p>
<p>This struck me as (a) an interesting programming problem and (b) an interesting Julia problem. Has anyone out there solved this one? For reference, here is the Julia code with a couple toy examples. Again, the idea is to make this generic for any n-order polynomial. </p>
<p>Cheers,</p>
<p>Aaron </p>
<pre><code>function derivative(f)
return function(x)
# pick a small value for h
h = x == 0 ? sqrt(eps(Float64)) : sqrt(eps(Float64)) * x
# floating point arithmetic gymnastics
xph = x + h
dx = xph - x
# evaluate f at x + h
f1 = f(xph)
# evaluate f at x
f0 = f(x)
# divide the difference by h
return (f1 - f0) / dx
end
end
function quadratic(f)
f1 = derivative(f)
c = f(0.0)
b = f1(0.0)
a = f(1.0) - b - c
return (-b + sqrt(b^2 - 4a*c + 0im))/2a, (-b - sqrt(b^2 - 4a*c + 0im))/2a
end
quadratic((x) -> x^2 - x - 2)
quadratic((x) -> x^2 + 2)
</code></pre> | 2014-03-23 08:43:19.847000+00:00 | 2019-03-27 19:08:09.157000+00:00 | 2019-03-27 19:08:09.157000+00:00 | julia | ['https://github.com/giordano/PolynomialRoots.jl', 'http://arxiv.org/abs/1203.1034', 'https://en.wikipedia.org/wiki/Loss_of_significance'] | 3 |
39,625,795 | <p>For reference, the <code>hard sigmoid function</code> may be defined differently in different places. In Courbariaux et al. 2016 [1] it's defined as:</p>
<blockquote>
<p>σ is the “hard sigmoid” function: σ(x) = clip((x + 1)/2, 0, 1) =
max(0, min(1, (x + 1)/2))</p>
</blockquote>
<p>The intent is to provide a probability value (hence constraining it to be between <code>0</code> and <code>1</code>) for use in stochastic binarization of neural network parameters (e.g. weight, activation, gradient). You use the probability <code>p = σ(x)</code> returned from the hard sigmoid function to set the parameter <code>x</code> to <code>+1</code> with <code>p</code> probability, or <code>-1</code> with probability <code>1-p</code>.</p>
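<p>For concreteness, here is a small NumPy sketch of that definition and of the stochastic binarization it is used for. Note that this implements the clip((x + 1)/2, 0, 1) form quoted above; libraries may define their built-in <code>hard_sigmoid</code> with a different slope, so treat this as an illustration of the paper's definition rather than of any particular framework's implementation:</p>
<pre><code>import numpy as np

def hard_sigmoid(x):
    # sigma(x) = clip((x + 1) / 2, 0, 1)
    return np.clip((np.asarray(x, dtype=float) + 1.0) / 2.0, 0.0, 1.0)

def stochastic_binarize(x, rng=None):
    # Set each entry to +1 with probability p = hard_sigmoid(x), else to -1.
    rng = np.random.default_rng() if rng is None else rng
    p = hard_sigmoid(x)
    return np.where(rng.random(np.shape(x)) < p, 1.0, -1.0)

print(hard_sigmoid([-2.0, -0.5, 0.0, 0.5, 2.0]))   # [0.   0.25 0.5  0.75 1.  ]
print(stochastic_binarize([-0.5, 0.0, 0.5]))
</code></pre>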
<p>[1] <a href="https://arxiv.org/abs/1602.02830" rel="nofollow">https://arxiv.org/abs/1602.02830</a> - "Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1", Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio, (Submitted on 9 Feb 2016 (v1), last revised 17 Mar 2016 (this version, v3))</p> | 2016-09-21 20:14:11.457000+00:00 | 2016-09-21 20:14:11.457000+00:00 | null | null | 35,411,194 | <p>I am working on Deep Nets using keras. There is an activation "hard sigmoid". Whats its mathematical definition ?</p>
<p>I know what is Sigmoid. Someone asked similar question on Quora: <a href="https://www.quora.com/What-is-hard-sigmoid-in-artificial-neural-networks-Why-is-it-faster-than-standard-sigmoid-Are-there-any-disadvantages-over-the-standard-sigmoid" rel="noreferrer">https://www.quora.com/What-is-hard-sigmoid-in-artificial-neural-networks-Why-is-it-faster-than-standard-sigmoid-Are-there-any-disadvantages-over-the-standard-sigmoid</a></p>
<p>But I could not find the precise mathematical definition anywhere ?</p> | 2016-02-15 13:54:11.357000+00:00 | 2018-09-13 12:49:52.120000+00:00 | 2017-08-24 12:54:23.583000+00:00 | math|tensorflow|deep-learning|keras|theano | ['https://arxiv.org/abs/1602.02830'] | 1 |
906,326 | <p>There is an interesting twist when you want to know the Unix Epoch time in .Net on a Windows system.</p>
<p>For nearly all practical cases and assuming the current time is past the Unix Epoch you could indeed take </p>
<pre><code>System.TimeSpan timeDifference = DateTime.UtcNow - 
new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
long unixEpochTime = System.Convert.ToInt64(timeDifference.TotalSeconds);
</code></pre>
<p>But,</p>
<p>Unix Epoch Time is defined as "... a system for describing points in time, defined as the number of seconds elapsed since midnight Coordinated Universal Time (UTC) of January 1, 1970, <strong>not counting leap seconds</strong>." (1)</p>
<p>Since 1972, UTC has included "leap seconds", and we have had a total of 25 of them so far. (2)</p>
<p>The .Net DateTime has no provisions for Leap Seconds, but will simply rely on the OS time. Windows is blissfully unaware of Leap Seconds (3)(4), and so will just have the notion of time as it receives it from its NTP master (I believe the default for a non-domain connected machine is time.windows.com ), which is probably serving up UTC including leap seconds.</p>
<p>This means that in order to be pedantically correct about the real number of seconds passed since the Unix epoch, you should probably add the leap seconds to the result obtained above for applications that rely on this. You would have to track the number of seconds to add at each time since leap seconds are not announced far in advance (2). However, as the definition of Unix Epoch Time explicitly excludes leap seconds, you can safely ignore this and simply recalculate seconds from the current UTC time. </p>
<p>Sometimes, leap seconds do cause software mayhem (5). The debate over whether to keep or eliminate the practice is ongoing (6)(7)(8). </p>
<p>The last leap second at the time of the answer occurred on the 1st of July 2012 (9) and caused problems for various sites and applications (10)</p>
<p>(1) <a href="http://en.wikipedia.org/wiki/Unix_time" rel="noreferrer">http://en.wikipedia.org/wiki/Unix_time</a></p>
<p>(2) <a href="http://en.wikipedia.org/wiki/Leap_second" rel="noreferrer">http://en.wikipedia.org/wiki/Leap_second</a></p>
<p>(3) <a href="http://support.microsoft.com/kb/909614" rel="noreferrer">http://support.microsoft.com/kb/909614</a></p>
<p>(4) <a href="http://www.meinberg.de/english/info/leap-second.htm" rel="noreferrer">http://www.meinberg.de/english/info/leap-second.htm</a></p>
<p>(5) <a href="http://www.networkworld.com/news/2009/010609-leap-second-snafu-affects-oracle.html" rel="noreferrer">http://www.networkworld.com/news/2009/010609-leap-second-snafu-affects-oracle.html</a></p>
<p>(6) <a href="http://www.pcworld.idg.com.au/article/358024/time_waits_no_one_leap_seconds_may_cut/" rel="noreferrer">http://www.pcworld.idg.com.au/article/358024/time_waits_no_one_leap_seconds_may_cut/</a></p>
<p>(7) <a href="http://queue.acm.org/detail.cfm?id=1967009" rel="noreferrer">http://queue.acm.org/detail.cfm?id=1967009</a></p>
<p>(8) <a href="http://arxiv.org/abs/1106.3141" rel="noreferrer">http://arxiv.org/abs/1106.3141</a></p>
<p>(9) <a href="http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat" rel="noreferrer">http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat</a></p>
<p>(10) <a href="http://arstechnica.com/business/2012/07/one-day-later-the-leap-second-v-the-internet-scorecard/" rel="noreferrer">http://arstechnica.com/business/2012/07/one-day-later-the-leap-second-v-the-internet-scorecard/</a></p>
<p>(The original answer had a mistake, which was thankfully caught by the commenters Edward Brey and Mormegil below)</p> | 2009-05-25 11:12:07.040000+00:00 | 2015-07-03 14:43:26.497000+00:00 | 2015-07-03 14:43:26.497000+00:00 | null | 906,034 | <p>I was able to find example code to get the current timestamp in Linux Epoch (Seconds since Midnight Jan 1st 1970), however I am having trouble finding an example as to how to calculate what the Epoch will be in the future, say for example 10 minutes from now, so how can I calculate a future time in Linux Epoch?</p> | 2009-05-25 09:21:46.733000+00:00 | 2015-07-03 14:43:26.497000+00:00 | null | c#|time|epoch | ['http://en.wikipedia.org/wiki/Unix_time', 'http://en.wikipedia.org/wiki/Leap_second', 'http://support.microsoft.com/kb/909614', 'http://www.meinberg.de/english/info/leap-second.htm', 'http://www.networkworld.com/news/2009/010609-leap-second-snafu-affects-oracle.html', 'http://www.pcworld.idg.com.au/article/358024/time_waits_no_one_leap_seconds_may_cut/', 'http://queue.acm.org/detail.cfm?id=1967009', 'http://arxiv.org/abs/1106.3141', 'http://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat', 'http://arstechnica.com/business/2012/07/one-day-later-the-leap-second-v-the-internet-scorecard/'] | 10 |
57,367,535 | <p>See</p>
<p><a href="https://arxiv.org/abs/1911.02696" rel="nofollow noreferrer">Efficient Computation of Positional Population Counts Using SIMD Instructions</a> by Marcus D. R. Klarqvist, Wojciech Muła, Daniel Lemire (7 Nov 2019)</p>
<p><a href="https://arxiv.org/abs/1611.07612" rel="nofollow noreferrer">Faster Population Counts using AVX2 Instructions</a> by Wojciech Muła, Nathan Kurz, Daniel Lemire (23 Nov 2016).</p>
<p>Basically, each full adder compresses 3 inputs to 2 outputs. So one can eliminate an entire 256-bit word for the price of 5 logic instructions. The full adder operation could be repeated until registers become exhausted. Then results in the registers are accumulated (as seen in most of the other answers). </p>
<p>Positional popcnt for 16-bit subwords is implemented here:
<a href="https://github.com/mklarqvist/positional-popcount" rel="nofollow noreferrer">https://github.com/mklarqvist/positional-popcount</a></p>
<pre><code>// Carry-Save Full Adder (3:2 compressor)
b ^= a;
a ^= c;
c ^= b; // xor sum
b |= a;
b ^= c; // carry
</code></pre>
<p><em>Note: the accumulate step for positional-popcnt is more expensive than for normal <a href="https://github.com/kimwalisch/libpopcnt" rel="nofollow noreferrer">simd popcnt</a>. Which I believe makes it feasible to add a couple of half-adders to the end of the CSU, it might pay to go all the way up to 256 words before accumulating.</em></p> | 2019-08-06 00:38:30.957000+00:00 | 2019-11-11 22:30:23.583000+00:00 | 2019-11-11 22:30:23.583000+00:00 | null | 55,081,525 | <p>(Related: <a href="https://stackoverflow.com/questions/7793997/how-to-quickly-count-bits-into-separate-bins-in-a-series-of-ints-on-sandy-bridge">How to quickly count bits into separate bins in a series of ints on Sandy Bridge?</a> is an earlier duplicate of this, with some different answers. Editor's note: the answers here are probably better.</p>
<p>Also, an AVX2 version of a similar problem, with many bins for a whole row of bits much wider than one <code>uint64_t</code>: <a href="https://stackoverflow.com/questions/58486138/improve-column-population-count-algorithm">Improve column population count algorithm</a>)</p>
<hr>
<p>I am working on a project in C where I need to go through tens of millions of masks (of type ulong (64-bit)) and update an array (called <code>target</code>) of 64 short integers (uint16) based on a simple rule:</p>
<pre><code>// for any given mask, do the following loop
for (i = 0; i < 64; i++) {
if (mask & (1ull << i)) {
        target[i]++;
}
}
</code></pre>
<p>The problem is that I need to do the above loops on tens of millions of masks and I need to finish in less than a second. I wonder if there is any way to speed it up, like using some sort of special assembly instruction that represents the above loop. </p>
<p>Currently I use gcc 4.8.4 on ubuntu 14.04 (i7-2670QM, supporting AVX, not AVX2) to compile and run the following code and took about 2 seconds. Would love to make it run under 200ms.</p>
<pre><code>#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>
#include <sys/stat.h>
double getTS() {
struct timeval tv;
gettimeofday(&tv, NULL);
return tv.tv_sec + tv.tv_usec / 1000000.0;
}
unsigned int target[64];
int main(int argc, char *argv[]) {
int i, j;
unsigned long x = 123;
unsigned long m = 1;
char *p = malloc(8 * 10000000);
if (!p) {
printf("failed to allocate\n");
exit(0);
}
memset(p, 0xff, 80000000);
printf("p=%p\n", p);
unsigned long *pLong = (unsigned long*)p;
double start = getTS();
for (j = 0; j < 10000000; j++) {
m = 1;
for (i = 0; i < 64; i++) {
if ((pLong[j] & m) == m) {
target[i]++;
}
m = (m << 1);
}
}
printf("took %f secs\n", getTS() - start);
return 0;
}
</code></pre>
<p>Thanks in advance!</p> | 2019-03-09 20:13:47.997000+00:00 | 2020-01-13 20:37:13.753000+00:00 | 2019-10-21 19:41:30.183000+00:00 | c|optimization|x86|x86-64|simd | ['https://arxiv.org/abs/1911.02696', 'https://arxiv.org/abs/1611.07612', 'https://github.com/mklarqvist/positional-popcount', 'https://github.com/kimwalisch/libpopcnt'] | 4 |
<p>I would strongly recommend you to use a <a href="https://en.wikipedia.org/wiki/Convolutional_neural_network" rel="nofollow noreferrer">Convolutional Neural Network (CNN)</a> to solve this 10-class image classification problem, since you can obtain many images of the "products".
The pipeline will be very similar to that of an image classification problem using a CNN, such as <a href="http://s3.amazonaws.com/academia.edu.documents/30766359/10.1.1.41.6835.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1473733206&Signature=F6QrJ9%2Fw01NuZgy9622H%2BswdW7U%3D&response-content-disposition=inline%3B%20filename%3DLearning_algorithms_for_classification_A.pdf" rel="nofollow noreferrer">handwritten digit recognition</a>.</p>
<p>For your question, in fact, it would be better to crop the "products" and then resize them to the same size to train a CNN classifier. And at the recognition(or prediction) phase, you should also crop the product and resize it to that size to feed it into the pre-trained classifier. Benefits of this preprocessing procedure include:</p>
<ul>
<li>greatly reduce the degree of difficulty for recognition and improve accuracy.</li>
<li>properly smaller image size needs less computation and memory consumption while the corresponding classifier still can have a competitive(or same) accuracy. </li>
</ul>
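<p>A minimal sketch of the crop-and-resize preprocessing described above could look like the following (OpenCV is assumed, and the 224x224 target size and the (x, y, w, h) box format are just hypothetical choices):</p>
<pre><code>import cv2

def crop_and_resize(img, box, size=(224, 224)):
    # box is a hypothetical (x, y, w, h) region containing the product.
    x, y, w, h = box
    crop = img[y:y + h, x:x + w]
    # Resize the crop to a fixed input size so that training and prediction
    # always feed the classifier images at the same scale.
    return cv2.resize(crop, size, interpolation=cv2.INTER_AREA)

img = cv2.imread("product_photo.jpg")               # hypothetical file name
sample = crop_and_resize(img, (100, 200, 800, 800)) # hypothetical box
</code></pre>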
<p>For the "scale-variant image recognition" problem, in fact, as mentioned above, at the recognition phase you should also crop the product and resize it to the same size as that of training your CNN, so the scale would not change violently. On the other hand, you can perform <a href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf" rel="nofollow noreferrer">data augmentation</a> and <a href="http://arxiv.org/vc/arxiv/papers/1501/1501.02876v1.pdf" rel="nofollow noreferrer">more augmentation methods</a> before training CNN to improve the CNN's robustness to scale-variance. Here is an example for face data augmentation, from left to right are <code>normal</code>, <code>zoom out</code>, <code>zoom in</code>, <code>rotate</code> seprately and you can make it more:</p>
<p><a href="https://i.stack.imgur.com/Jvwlo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jvwlo.jpg" alt="normal"></a> <a href="https://i.stack.imgur.com/5gawM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5gawM.jpg" alt="zoom out"></a> <a href="https://i.stack.imgur.com/1YXjr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1YXjr.jpg" alt="zoom in"></a> <a href="https://i.stack.imgur.com/9k881.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9k881.jpg" alt="rotate"></a></p>
<p>Hope my expression is clear and will help you.</p> | 2016-09-13 02:43:37.077000+00:00 | 2016-11-14 12:53:47.170000+00:00 | 2016-11-14 12:53:47.170000+00:00 | null | 39,453,421 | <p>I have a task related to image recognition, and the task is to tell which product is based on thousands photos taken for a wide variety of products. </p>
<p>For example, we have taken short videos (1 minute) for 10 different labeled products. And then we use cv2.VideoCapture to convert them into 60s * 30fps ~ 1,800 frames per product. So we have about 18K different images for 10 products all perfectly labeled. </p>
<p>I am thinking about turning images into pixels and using the label as the outcome and all the pixels as input to use machine learning (a neural net) to turn this into a classification problem. However, each image is 1080 * 1920, which gives you 2 million pixels, let alone the color (RGB, etc.). </p>
<p>Is there any standard technique which I should use? I can do edge detection, contour to crop them to a smaller size but then all the pictures will end up in different size, isn't it? If I scale it all to be the same size, won't that all change the scale-variant image recognition problems? </p>
<p>I own those products so I can take as many photos as I want. Sorry this is more like a best-practice or architectural question instead of a specific programming question. </p>
<p>This is picture that scaled down to be smaller so you can have a sense of what problem I am trying to solve. </p>
<p><a href="https://i.stack.imgur.com/tgCUU.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tgCUU.jpg" alt="enter image description here"></a></p> | 2016-09-12 15:05:48.573000+00:00 | 2016-11-14 12:53:47.170000+00:00 | 2016-09-13 01:01:39.223000+00:00 | opencv|image-processing|machine-learning|scikit-learn|image-recognition | ['https://en.wikipedia.org/wiki/Convolutional_neural_network', 'http://s3.amazonaws.com/academia.edu.documents/30766359/10.1.1.41.6835.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1473733206&Signature=F6QrJ9%2Fw01NuZgy9622H%2BswdW7U%3D&response-content-disposition=inline%3B%20filename%3DLearning_algorithms_for_classification_A.pdf', 'http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf', 'http://arxiv.org/vc/arxiv/papers/1501/1501.02876v1.pdf', 'https://i.stack.imgur.com/Jvwlo.jpg', 'https://i.stack.imgur.com/5gawM.jpg', 'https://i.stack.imgur.com/1YXjr.jpg', 'https://i.stack.imgur.com/9k881.jpg'] | 8 |
31,780,488 | <p>If you are looking for a fast Java (or Python) POS tagger, you might consider to use <a href="http://rdrpostagger.sourceforge.net/" rel="nofollow">RDRPOSTagger</a>. RDRPOSTagger is a robust, easy-to-use and language-independent toolkit for POS and morphological tagging. It obtains fast performance in both learning and tagging process. For example in Java, tagging speed is 90K English words/second using a computer with Core2Duo 2.4 GHz. And it achieves a very competitive accuracy in comparison to the state-of-the-art results. See experimental results including performance speed and tagging accuracy on 13 languages in <a href="http://arxiv.org/abs/1412.4021" rel="nofollow">this paper</a>.</p> | 2015-08-03 06:13:10.717000+00:00 | 2015-11-21 06:03:24.873000+00:00 | 2015-11-21 06:03:24.873000+00:00 | null | 29,582,958 | <p>I am just playing around with Part-of-speech Tagging, and started using OpenNLP. </p>
<p>I am using the following code to load the model (Java):</p>
<pre><code> m_modelFile = new FileInputStream("c:\\DATA\\en-parser-chunking.bin");
m_model = new ParserModel(m_modelFile);
m_parser = ParserFactory.create(m_model);
...
Parse topParses[] = ParserTool.parseLine(sentence, m_parser, 1);
</code></pre>
<p>I am noticing that the call to create the ParserModel object is insanely slow. Could be b/c en-parser-chunking.bin is 35MB in size. Is there a better way to use this so that it's not this slow? Alternatively, is there a POS tagger you recommend or a way of calling the API that's faster?</p>
<p>I've been playing around with the accuracy, and it's pretty good. But, I am not happy with the performance when loading the model...</p>
<p>Thanks guys.</p> | 2015-04-11 20:58:21.020000+00:00 | 2015-11-21 06:03:24.873000+00:00 | null | opennlp|pos-tagger | ['http://rdrpostagger.sourceforge.net/', 'http://arxiv.org/abs/1412.4021'] | 2 |
39,208,882 | <p>This is going to be a long story, possibly more suited to <a href="https://stats.stackexchange.com/">https://stats.stackexchange.com/</a>.</p>
<p>====== Part 1 -- The problem ======</p>
<p>This is the sequence generating the error:</p>
<pre><code>library(fExtremes)
samp <- read.csv("optimdata.csv")[ ,2]
## does not converge
para <- gevFit(samp, type = "mle")
</code></pre>
<p>We are facing the typical cause of lack-of-convergence when using <code>optim()</code> and friends: inadequate starting values for the optimisation.</p>
<p>To see what goes wrong, let us use the PWM estimator (<a href="http://arxiv.org/abs/1310.3222" rel="nofollow noreferrer">http://arxiv.org/abs/1310.3222</a>); this consists of an analytical formula, hence it does not run into convergence problems, since it makes no use of <code>optim()</code>:</p>
<pre><code>para <- gevFit(samp, type = "pwm")
fitpwm<- attr(para, "fit")
fitpwm$par.ests
</code></pre>
<p>The estimated tail parameter <code>xi</code> is negative, corresponding to a bounded upper tail; in fact the fitted distribution displays even more "upper tail boundedness" than the sample data, as you can see from the "leveling off" of the quantile-quantile graph at the right:</p>
<pre><code>qqgevplot <- function(samp, params){
probs <- seq(0.1,0.99,by=0.01)
qqempir <- quantile(samp, probs)
qqtheor <- qgev(probs, xi=params["xi"], mu=params["mu"], beta=params["beta"])
rang <- range(qqempir,qqtheor)
plot(qqempir, qqtheor, xlim=rang, ylim=rang,
xlab="empirical", ylab="theoretical",
main="Quantile-quantile plot")
abline(a=0,b=1, col=2)
}
qqgevplot(samp, fitpwm$par.ests)
</code></pre>
<p>For <code>xi < -0.5</code> the MLE estimator is not regular (<a href="http://arxiv.org/abs/1301.5611" rel="nofollow noreferrer">http://arxiv.org/abs/1301.5611</a>): the value of -0.46 estimated by PWM for <code>xi</code> is very close to that. Now the PWM estimates are used internally by <code>gevFit()</code> as starting values for <code>optim()</code>: you can see this if you print out the code for the function <code>gevFit()</code>:</p>
<pre><code>print(gevFit)
print(.gevFit)
print(.gevmleFit)
</code></pre>
<p>The starting value for optim is <code>theta</code>, obtained by PWM. For the specific data at hand, this starting value is not adequate, in that it leads to non-convergence of <code>optim()</code>.</p>
<p>====== Part 2 -- solutions? ======</p>
<p>Solution 1 is to use <code>para <- gevFit(samp, type = "pwm")</code> as above. If you'd like to use ML, then you have to specify good starting values for <code>optim()</code>. Unfortunately, the <code>fExtremes</code> package does not make it easy to do so. You can then re-define your own version of <code>.gevmleFit</code> to include those, e.g.</p>
<pre><code>.gevmleFit <- function (data, block = NA, start.param, ...)
{
data = as.numeric(data)
n = length(data)
if(missing(start.param)){
theta = .gevpwmFit(data)$par.ests
}else{
theta = start.param
}
fit = optim(theta, .gevLLH, hessian = TRUE, ..., tmp = data)
if (fit$convergence)
warning("optimization may not have succeeded")
par.ests = fit$par
varcov = solve(fit$hessian)
par.ses = sqrt(diag(varcov))
ans = list(n = n, data = data, par.ests = par.ests, par.ses = par.ses,
varcov = varcov, converged = fit$convergence, nllh.final = fit$value)
class(ans) = "gev"
ans
}
## diverges, just as above
.gevmleFit(samp)
## diverges, just as above
startp <- fitpwm$par.ests
.gevmleFit(samp, start.param=startp)
## converges
startp <- structure(c(-0.1, 1, 1), names=names(fitpwm$par.ests))
.gevmleFit(samp, start.param=startp)$par.ests
</code></pre>
<p>Now check this out: the <code>beta</code> estimated by PWM is 0.1245; by changing this to a tiny amount, the MLE is made to converge:</p>
<pre><code>startp <- fitpwm$par.ests
startp["beta"]
startp["beta"] <- 0.13
.gevmleFit(samp, start.param=startp)$par.ests
</code></pre>
<p>This hopefully clearly illustrates that blindly <code>optim()</code>ising works until it doesn't, and might then turn into a quite delicate endeavour indeed. For this reason, it might be useful to leave this reply here, rather than to migrate to CrossValidated.</p> | 2016-08-29 14:37:37.417000+00:00 | 2016-08-29 14:37:37.417000+00:00 | 2017-04-13 12:44:13.910000+00:00 | null | 39,111,048 | <p>I am using R {fExtremes} to find the best parameters of a GEV distribution for my data (a vector), but get the following error message</p>
<blockquote>
<p>Error in solve.default(fit$hessian) : Lapack routine dgesv: system is exactly singular: U[1,1] = 0</p>
</blockquote>
<p>I traced back to fit$hessian, and found my hessian matrix is a singular matrix, all of the elements are 0s. The source code (<a href="https://github.com/cran/fExtremes/blob/master/R/GevFit.R" rel="nofollow">https://github.com/cran/fExtremes/blob/master/R/GevFit.R</a>) of gevFit() shows fit$hessian is calculated by optim(). The output parameters are exactly the same values as the initial parameters. I am wondering what problems with my data could cause this? I copied my code here </p>
<pre><code>> min(sample);
[1] 5.240909
> max(sample)
[1] 175.8677
> length(sample)
[1] 6789
> mean(sample)
[1] 78.04107
>para<-gevFit(sample, type = "mle")
Error in solve.default(fit$hessian) :
Lapack routine dgesv: system is exactly singular: U[1,1] = 0
fit = optim(theta, .gumLLH, hessian = TRUE, ..., tmp = data)
> fit
$par
xi -0.3129225
mu 72.5542497
beta 16.4450897
$value
[1] 1e+06
$counts
function gradient
4 NA
$convergence
[1] 0
$message
NULL
$hessian
xi mu beta
xi 0 0 0
mu 0 0 0
beta 0 0 0
</code></pre>
<p>I updated my dataset on google docs:
<a href="https://docs.google.com/spreadsheets/d/1IRRpjmdrrJPhNmfiLism_P0efV_Ot4HlEsa6kwMnljc/edit?usp=sharing" rel="nofollow">https://docs.google.com/spreadsheets/d/1IRRpjmdrrJPhNmfiLism_P0efV_Ot4HlEsa6kwMnljc/edit?usp=sharing</a></p> | 2016-08-23 21:34:26.063000+00:00 | 2016-08-29 14:37:37.417000+00:00 | 2016-08-28 16:57:16.027000+00:00 | r|math|optimization|hessian|mle | ['https://stats.stackexchange.com/', 'http://arxiv.org/abs/1310.3222', 'http://arxiv.org/abs/1301.5611'] | 3 |
30,614,547 | <p>Johansson et al. have recently presented a system for theory exploration, that is, coming up with lemmas based on your definitions. You can find their implementation on <a href="https://github.com/moajohansson/IsaHipster" rel="nofollow">GitHub</a> and the paper on <a href="http://arxiv.org/pdf/1405.3426" rel="nofollow">arXiv</a>. In the paper, you will also find a lot of examples. The only drawback is that, as far as I can tell, their implementation only works with Isabelle2013-2.</p>
<p><em>Johansson, Moa, et al. "Hipster: Integrating Theory Exploration in a Proof Assistant." Intelligent Computer Mathematics. Springer International Publishing, 2014. 108-122.</em></p> | 2015-06-03 08:19:49.783000+00:00 | 2015-06-03 08:19:49.783000+00:00 | null | null | 30,551,776 | <p>i do not have example, but i googled some people can
use Isabelle to search lemma and discover new lemma with Isabelle</p>
<p>do not know where give hints to discover or search next lemma after current lemma proved automatically</p>
<p>Can you give examples of how to discover lemmas?</p> | 2015-05-30 21:57:08.613000+00:00 | 2015-06-03 08:19:49.783000+00:00 | null | isabelle | ['https://github.com/moajohansson/IsaHipster', 'http://arxiv.org/pdf/1405.3426'] | 2 |
52,690,803 | <p>Your numbers are too small to be meaningful. The difference between 166 ms and 196 ms is, in absolute terms, tiny. Who knows what other factors could be influencing that? VM warmup time, differences in memory allocation, or any host of other things could easily cause a discrepancy of that size. To be sure, you should make the numbers much bigger.</p>
<p>On my machine, running Racket v7.0, I increased the arguments from <code>40000000</code> to <code>1000000000</code> and ran the program. The results were 2.361 s for the internal definition case and 2.212 s for the external definition case. Given the sorts of factors listed above, that difference is too small to be meaningful.</p>
<p>Benchmarking is hard, and benchmarking languages that run on VMs and are JIT compiled is harder. Even if you account for warmup and GC, run lots of iterations and take the averages, and generally try to do things right, the results you get could still be nearly meaningless, as the 2017 OOPSLA paper <a href="https://arxiv.org/pdf/1602.00602.pdf" rel="nofollow noreferrer">Virtual Machine Warmup Blows Hot and Cold</a> explains:</p>
<blockquote>
<p>Virtual Machines (VMs) with Just-In-Time (JIT) compilers are traditionally thought to execute programs in two phases: the initial warmup phase determines which parts of a program would most benefit from dynamic compilation, before JIT compiling those parts into machine code; subsequently the program is said to be at a steady state of peak performance. Measurement methodologies almost always discard data collected during the warmup phase such that reported measurements focus entirely on peak performance. We introduce a fully automated statistical approach, based on changepoint analysis, which allows us to determine if a program has reached a steady state and, if so, whether that represents peak performance or not. Using this, we show that <strong>even when run in the most controlled of circumstances, small, deterministic, widely studied microbenchmarks often fail to reach a steady state of peak performance on a variety of common VMs</strong>. Repeating our experiment on 3 different machines, we found that <strong>at most 43.5%</strong> of ⟨VM, benchmark⟩ pairs consistently reach a steady state of peak performance.</p>
</blockquote>
<p>Emphasis mine. Make sure you’re measuring what you think you’re measuring.</p> | 2018-10-07 16:57:37.090000+00:00 | 2018-10-07 16:57:37.090000+00:00 | null | null | 52,688,942 | <p>I tried running the program below</p>
<pre><code>(define (odd-internal x)
(define (even x)
(if (zero? x)
#t
(odd-internal (sub1 x))))
(if (zero? x)
#f
(even (sub1 x))))
(define (odd-external x)
(if (zero? x)
#f
(even (sub1 x))))
(define (even x)
(if (zero? x)
#t
(odd-external (sub1 x))))
(begin (display "Using internal definition\n")
(time (odd-internal 40000000)))
(begin (display "Using external definition\n")
(time (odd-external 40000000)))
</code></pre>
<p>This is the result in Racket</p>
<pre><code>Using internal definition
cpu time: 166 real time: 165 gc time: 0
#f
Using external definition
cpu time: 196 real time: 196 gc time: 0
#f
</code></pre>
<p>There you can see using internal definition is quite a bit faster. I've tried running on Chez Scheme and the result is similar. Why is that?</p> | 2018-10-07 13:26:16.597000+00:00 | 2018-10-07 22:05:57.053000+00:00 | null | performance|scheme|racket | ['https://arxiv.org/pdf/1602.00602.pdf'] | 1 |
<p>As answered by others, there is a purely functional implementation of a standard min-heap proposed in <a href="https://arxiv.org/abs/1312.4666" rel="nofollow noreferrer">the paper</a> by Vladimir Kostyukov. The following is a reimplementation in F#:</p>
<pre><code>type heap<'t> =
| Leaf
| Branch of 't * heap<'t> * heap<'t>
let rec height hp =
match hp with
| Branch (_, l, r) -> 1 + max (height l) (height r)
| _ -> 0
let rec iscomplete hp =
match hp with
| Branch (_, l, r) -> iscomplete l && iscomplete r && height l = height r
| _ -> true
// push x into the heap hp
let rec insert hp x =
match hp with
| Leaf -> Branch(x, Leaf, Leaf)
| Branch (v, l, r) ->
let fixroot v l r =
match l, r with
| Branch (v', l', r'), _ when v' < v -> Branch(v', Branch(v, l', r'), r)
| _, Branch (v', l', r') when v' < v -> Branch(v', l, Branch(v, l', r'))
| _ -> Branch(v, l, r)
if height l = height r then
if iscomplete r then
fixroot v (insert l x) r
else
fixroot v l (insert r x)
else if iscomplete l then
fixroot v (insert l x) r
else
fixroot v l (insert r x)
let rec trickledown v l r =
match l, r with
| Branch (vl, _, _), Branch (vr, l', r') when vr < min v vl -> Branch(vr, l, trickledown v l' r')
| Branch (vl, l', r'), _ when vl < v -> Branch(vl, trickledown v l' r', r)
| _ -> Branch(v, l, r)
// build a heap from the array a
let heapify a =
let rec buildfrom i =
if i < Array.length a then
trickledown a.[i] (buildfrom (2 * i + 1)) (buildfrom (2 * i + 2))
else
Leaf
buildfrom 0
// pop and rebuild the heap hp
let rec remove hp =
match hp with
| Branch (x, l, r) ->
let rfloat v l r =
match r with
| Branch (v', l', r') -> Branch(v', l, Branch(v, l', r'))
| _ -> Branch(v, l, r)
let lfloat v l r =
match l with
| Branch (v', l', r') -> Branch(v', Branch(v, l', r'), r)
| _ -> Branch(v, l, r)
let rec merge l r =
if height l = height r then
match r with
| Branch (v, l', r') -> rfloat v l (merge l' r')
| _ -> Leaf
else
match l with
| Branch (v, l', r') -> lfloat v (merge l' r') r
| _ -> Leaf
match merge l r with
| Branch (v, l', r') -> (x, trickledown v l' r')
| _ -> (x, Leaf)
| _ -> failwith "heap empty"
</code></pre>
<p>For simplification purposes, the height of a heap is recalculated using function <code>height</code>. In the original version, the heap is decorated with this information, as:</p>
<pre><code>type heap<'t> =
| Leaf
| Branch of int * 't * heap<'t> * heap<'t>
</code></pre>
<p>The pure functional implementation is not asymptotically less performant than Eytzinger's method (i.e. using an array): the runtime complexity of <code>insert</code>, <code>remove</code>, etc. is still <code>O(lg n)</code>. But it may not benefit from cache locality the way an array-based implementation does.</p> | 2022-07-18 14:45:38.350000+00:00 | 2022-07-18 14:51:44.790000+00:00 | 2022-07-18 14:51:44.790000+00:00 | null | 8,696,770 | <p>Are there any implementations of a purely functional standard binary heap? I know there are lots of interesting heaps eg: Binomial, leftist heap, they all have functional implementation, just wonder is there a way to implement standard binary heap or we have to use Array to implement it, because of the immutable type ? Thanks!</p> | 2012-01-02 01:30:56.137000+00:00 | 2022-07-18 14:51:44.790000+00:00 | 2012-01-02 02:23:03.563000+00:00 | haskell|functional-programming|ocaml | ['https://arxiv.org/abs/1312.4666'] | 1 |
16,414,885 | <p>You can look through the ideas described in this paper <a href="http://arxiv.org/abs/1312.4666" rel="nofollow">A Functional Approach to Standard Binary Heaps</a> or in this source <a href="https://github.com/vkostyukov/scalacaster/blob/master/src/heap/Heap.scala" rel="nofollow">Heap.scala</a>.</p> | 2013-05-07 08:47:31.610000+00:00 | 2014-01-15 17:14:54.703000+00:00 | 2014-01-15 17:14:54.703000+00:00 | null | 8,696,770 | <p>Are there any implementations of a purely functional standard binary heap? I know there are lots of interesting heaps eg: Binomial, leftist heap, they all have functional implementation, just wonder is there a way to implement standard binary heap or we have to use Array to implement it, because of the immutable type ? Thanks!</p> | 2012-01-02 01:30:56.137000+00:00 | 2022-07-18 14:51:44.790000+00:00 | 2012-01-02 02:23:03.563000+00:00 | haskell|functional-programming|ocaml | ['http://arxiv.org/abs/1312.4666', 'https://github.com/vkostyukov/scalacaster/blob/master/src/heap/Heap.scala'] | 2 |
52,409,424 | <p>You need to define your loss so that it ignores the second portion of the output whenever the first output is zero. If the image does not contain a dot, you may put arbitrary numbers in the last 4 slots, because with the first number at zero they do not contribute to the loss. I hope this helps.</p>
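<p>A minimal sketch of such a masked loss with Keras (the <code>[p, x, y, w, h]</code> output layout and all names here are illustrative assumptions, not from any specific library):</p>
<pre><code>import tensorflow as tf

def masked_detection_loss(y_true, y_pred):
    p_true = y_true[:, 0:1]   # presence flag, shape (batch, 1)
    p_pred = y_pred[:, 0:1]
    # always learn whether a dot is present
    presence_loss = tf.keras.losses.binary_crossentropy(p_true, p_pred)
    # box error, per sample
    box_error = tf.reduce_mean(tf.square(y_true[:, 1:] - y_pred[:, 1:]), axis=-1)
    # the box term only counts when a dot is actually present (flag = 1)
    return presence_loss + y_true[:, 0] * box_error

# model.compile(optimizer='adam', loss=masked_detection_loss)
</code></pre>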
<p>More reading: object detection and the idea of anchor boxes in the Faster R-CNN paper may help you understand how this might work: <a href="https://arxiv.org/pdf/1506.01497.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1506.01497.pdf</a></p> | 2018-09-19 15:30:35.120000+00:00 | 2018-09-19 15:30:35.120000+00:00 | null | null | 52,402,017 | <p>For example, I want to train a tensorflow model which has 2 outputs. If the first output is 1 then I look at the second output, but if the first output is 0 then the second output doesn't matter. Is there a way in tensorflow to set the error on the second output to 0 when the first output is 0, or do I have to specify all the outputs? Sorry if that is a dumb question, but I'm new to tensorflow.</p>
<p>A better example: I want to check if there is a dot in the fed image. My model has 5 outputs. The first one predicts whether there is a dot in the image (values from 0 to 1). The next 4 outputs show where that dot is in the image (position, width and height). So if I feed the model an image without a dot, what should I put in the output: [0,anything,anything,anything,anything] or [0,0,0,0,0]? And if the first one, how do I do it?</p> | 2018-09-19 09:00:35.360000+00:00 | 2018-09-19 15:30:35.120000+00:00 | 2018-09-19 10:51:09.403000+00:00 | python|tensorflow|machine-learning | ['https://arxiv.org/pdf/1506.01497.pdf'] | 1
61,051,069 | <p>I believe what you're describing is an anomaly detection model. Other ML models exist for this purpose, such as the one class support vector machine (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html</a>) and isolation forest (<a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html</a>). It's possible to implement a neural network, but you will need to have a customized loss function - as in, binary cross-entropy doesn't make sense for this application. One example of such a loss function is described here: <a href="https://arxiv.org/pdf/1802.06360.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.06360.pdf</a> which is based on the one class SVM. </p>
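<p>As a quick illustration, the scikit-learn models mentioned above are trained on the "normal" class only and then used to flag outliers (a sketch with random arrays standing in for real data):</p>
<pre><code>import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

X_normal = np.random.randn(1000, 16)   # training data: the single known class
X_new = np.random.randn(10, 16)        # data to score later

iso = IsolationForest(random_state=0).fit(X_normal)
ocsvm = OneClassSVM(nu=0.05).fit(X_normal)

# both return +1 for "looks like the training class" and -1 for anomalies
print(iso.predict(X_new))
print(ocsvm.predict(X_new))
</code></pre>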
<p>I have an implementation of a one class fully connected network here in Keras: <a href="https://github.com/danielenricocahall/One-Class-NeuralNetwork" rel="nofollow noreferrer">https://github.com/danielenricocahall/One-Class-NeuralNetwork</a> which utilizes a loss function based on the one described in that paper, if that helps.</p>
<p>Good luck!</p> | 2020-04-05 23:58:32.940000+00:00 | 2020-04-05 23:58:32.940000+00:00 | null | null | 61,048,870 | <p>Can I build a cnn in keras with only one class (class - 0) so it can predict if the given date belongs to this class?
Thanks in advance</p>
<p>Edite :Thanks for the answer and comments so far. My data is acceleration time series from a healthy structure but I don't have access to damaged state acceleration signals, so I have only data for class 0. </p> | 2020-04-05 20:13:30.043000+00:00 | 2020-04-06 08:28:28.943000+00:00 | 2020-04-06 08:28:28.943000+00:00 | keras|conv-neural-network|one-class-classification | ['https://scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html', 'https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html', 'https://arxiv.org/pdf/1802.06360.pdf', 'https://github.com/danielenricocahall/One-Class-NeuralNetwork'] | 4 |
61,099,202 | <p>It depends more on the size of the objects you want to detect, or in other words, the size of the receptive field you want to have. Nevertheless, choosing the kernel size has always been a challenging decision. That is why the Inception model was created: it uses different kernel sizes in parallel (1x1, 3x3, 5x5). The creators of this model also went deeper and decomposed the convolutional layers into ones with a smaller patch size while maintaining the same receptive field, in order to speed up training (e.g. 5x5 was decomposed into two 3x3, and 3x3 into 3x1 followed by 1x3), creating different versions of the Inception model.</p>
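<p>As a rough illustration of the multi-kernel idea, a hedged Keras sketch (not the actual Inception implementation; the filter counts are arbitrary):</p>
<pre><code>from tensorflow.keras import layers

def naive_inception_block(x):
    b1 = layers.Conv2D(32, (1, 1), padding='same', activation='relu')(x)
    b3 = layers.Conv2D(32, (3, 3), padding='same', activation='relu')(x)
    b5 = layers.Conv2D(32, (5, 5), padding='same', activation='relu')(x)
    # concatenate the parallel branches along the channel axis
    return layers.concatenate([b1, b3, b5], axis=-1)
</code></pre>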
<p>You can also check the Inception V2 paper for more details <a href="https://arxiv.org/abs/1512.00567" rel="nofollow noreferrer">https://arxiv.org/abs/1512.00567</a></p> | 2020-04-08 11:03:46.960000+00:00 | 2020-04-08 11:03:46.960000+00:00 | null | null | 45,538,980 | <p>I am playing around convolutional neural networks at home with tensorflow (btw I have done the udacity deep learning course, so I have the theory basis). <strong>What impact has the size of the patch when one runs a convolution? does such size have to change when the image is bigger/smaller?</strong></p>
<p>One of the exercises I did involved the CIFAR-10 databaese of images (32x32 px), then I used convolutions of 3x3 (with a padding of 1), getting decent results. </p>
<p>But lets say now I want to play with images larger than that (say 100x100), should I make my patches bigger? Do I keep them 3x3? Furthermore, what would be the impact of making a patch really big? (Say 50x50). </p>
<p>Normally I would test this at home directly, but running this on my computer is a bit slow (no nvidia GPU!)</p>
<p>So the question should be summarized as </p>
<ol>
<li>Should I increase/decrease the size of my patches when my input images are bigger/smaller?</li>
<li><strong>What is the impact (in terms of performance/overfitting) of increasing/decreasing my path size?</strong></li>
</ol> | 2017-08-07 03:21:37.377000+00:00 | 2020-04-08 11:03:46.960000+00:00 | null | machine-learning|neural-network|conv-neural-network|image-recognition | ['https://arxiv.org/abs/1512.00567'] | 1 |
64,223,512 | <p>I tried several different solutions to this problem, and found that encoding the state was the best one for my case.</p>
<ul>
<li>Select a pre-estimated maximum state space for the model; if the actual
state space is smaller than this maximum, pad the state with zeros.</li>
<li>Consider only the agent's own state, without sharing any state from
the other agents.</li>
<li>As paper <code>[1]</code> mentions, the extra connected autonomous
vehicles (CAVs) are not included in the state, and if there are fewer of them
than the maximum number of CAVs, the state is padded with zeros. We can choose
how many agents are allowed to share their state and add it to the agent’s
state.</li>
<li><strong>Encode the state</strong>, which helps us process the input and compress the information into a fixed length. In the encoder, every
cell in the LSTM layer (or an RNN with Gated Recurrent Units, GRU)
returns a hidden state (Ht) and a cell state (E’t).</li>
</ul>
<p><a href="https://i.stack.imgur.com/drqMk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/drqMk.png" alt="enter image description here" /></a></p>
<p>For the encoder, I use the <a href="https://www.tensorflow.org/tutorials/text/nmt_with_attention" rel="nofollow noreferrer">Neural machine translation with attention code</a></p>
<pre><code>class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
</code></pre>
<ul>
<li><strong>LSTM zero padding and masking</strong>, where we pad the state with a special value that is masked (skipped) later; a minimal sketch follows this list. If we pad without masking, the
padded values are treated as actual values and thus become noise
in the state [2-4].</li>
</ul>
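<p>A minimal sketch of the zero-padding-plus-masking idea in Keras (the sequence length and feature size are assumptions):</p>
<pre><code>import tensorflow as tf

max_len, n_features = 10, 4
model = tf.keras.Sequential([
    # timesteps whose features are all 0.0 are skipped by downstream layers
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(max_len, n_features)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1)
])
</code></pre>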
<p>1- Vinitsky, E., Kreidieh, A., Le Flem, L., Kheterpal, N., Jang, K., Wu, C., ... & Bayen, A. M. (2018, October). Benchmarks for reinforcement learning in mixed-autonomy traffic. In Conference on Robot Learning (pp. 399-409)</p>
<p>2- Kochkina, E., Liakata, M., & Augenstein, I. (2017). Turing at semeval-2017 task 8: Sequential approach to rumour stance classification with branch-lstm. arXiv preprint arXiv:1704.07221.</p>
<p>3- Ma, L., & Liang, L. (2020). Enhance CNN Robustness Against Noises for Classification of 12-Lead ECG with Variable Length. arXiv preprint arXiv:2008.03609.</p>
<p>4- <a href="https://datascience.stackexchange.com/a/48814">How to feed LSTM with different input array sizes?</a></p>
<p>5- Zhao, X., Xia, L., Zhang, L., Ding, Z., Yin, D., & Tang, J. (2018, September). Deep reinforcement learning for page-wise recommendations. In Proceedings of the 12th ACM Conference on Recommender Systems (pp. 95-103).</p> | 2020-10-06 09:59:07.937000+00:00 | 2021-02-06 12:22:21.350000+00:00 | 2021-02-06 12:22:21.350000+00:00 | null | 63,728,800 | <p>I'm working in <strong>A2C</strong> reinforcement learning where my environment has an increasing and decreasing in the number of agents. As a result of the increasing and decreasing the number of agents, the state space will also change. I have tried to solve the problem of changing the state space this way:</p>
<ul>
<li><p>If the state space exceeds the maximum state space that selected
as <code>n_input</code>, the excess state space will be selected by
<code>np.random.choice</code> where random choice provides a way of creating random samples from the state space after converting the state space into probabilities.</p>
</li>
<li><p>If the state space is less than the maximum state I padded the state
space with zeros.</p>
<pre><code>def get_state_new(state):
n_features = n_input-len(get_state(env))
# print("state",len(get_state(env)))
p = np.array(state)
p = np.exp(p)
if p.sum() != 1.0:
p = p * (1. / p.sum())
if len(get_state(env)) > n_input:
statappend = np.random.choice(state, size=n_input, p=p)
# print(statappend)
else:
statappend = np.zeros(n_input)
statappend[:state.shape[0]] = state
return statappend
</code></pre>
</li>
</ul>
<p>It works but the results are not as expected and I don't know if this correct or not.</p>
<p><strong>My question</strong></p>
<p>Are there any reference papers that deal with such a problem and how to deal with the changing of state space?</p> | 2020-09-03 17:27:14.273000+00:00 | 2021-02-06 12:22:21.350000+00:00 | 2020-09-05 17:49:38.220000+00:00 | python|tensorflow|reinforcement-learning | ['https://i.stack.imgur.com/drqMk.png', 'https://www.tensorflow.org/tutorials/text/nmt_with_attention', 'https://datascience.stackexchange.com/a/48814'] | 3 |
63,692,464 | <p>If your class looks like this:</p>
<pre class="lang-cpp prettyprint-override"><code>struct Person {
double age;
double income;
size_t location;
};
</code></pre>
<p>then you <em>might</em> benefit from rearranging to</p>
<pre class="lang-cpp prettyprint-override"><code>std::vector<double> ages;
std::vector<double> incomes;
std::vector<size_t> locations;
</code></pre>
<p>But it depends on your access patterns. If you frequently access multiple elements of a person at a time, then having the elements blocked together makes sense.</p>
<p>If your class looks like this:</p>
<pre class="lang-cpp prettyprint-override"><code>struct Population {
std::vector<double> many_ages;
std::vector<double> many_incomes;
std::vector<size_t> many_locations;
};
</code></pre>
<p>Then you're using the form your resource recommended. Using any one of these arrays individually is faster than using the first class, but using elements from all three arrays simultaneously is probably slower with the second class.</p>
<p>Ultimately, you should structure your code to be as clean and intuitive as possible. The biggest source of speed will be a strong understanding and appropriate use of algorithms, not the memory layout. I recommend disregarding this unless you already have strong HPC skills and need to squeeze maximum performance from your machine. In almost every other case your development time and sanity are worth far more than saving a few clock cycles.</p>
<p><strong>More broadly</strong></p>
<ol>
<li><p>An interesting paper related to this is <a href="https://arxiv.org/abs/1903.03129" rel="nofollow noreferrer">SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems</a>. A lot of work has gone into mapping ML algorithms to GPUs and, for ML applications, getting the memory layout right does make a real difference since so much time is spent on training and GPUs are optimized specifically for contiguous-array processing. But, the authors of the paper contend that even here if you understand algorithms well you can beat specialized hardware with optimized memory layouts, and they demonstrate this by getting their CPU to train 3.5x faster than their GPU.</p>
</li>
<li><p>More broadly, your question deals with the idea of <a href="https://en.wikipedia.org/wiki/CPU_cache#Cache_miss" rel="nofollow noreferrer">cache misses</a>. Since a cache miss is 200x more expensive than an L1 reference (<a href="https://gist.github.com/jboner/2841832" rel="nofollow noreferrer">link</a>), if your data layout is optimized to your computation, then you can really save time. <strong>However</strong>, as the above suggests, it is rarely the case that simply rearranging your data magically makes everything faster. Consider matrix multiplication. It's the perfect example because the data is laid out in a single array, as requested by your resource. However, for a simple triple-loop matmult GEMM implementation there are still 6 ways to arrange your loops. Some of these ways are much more efficient than others, but none of them give you anywhere near peak performance. Read through <a href="https://github.com/r-barnes/how-to-optimize-gemm" rel="nofollow noreferrer">this step-by-step explanation of matmult</a> to get a better sense of all the algorithmic optimizations necessary to get good performance.</p>
</li>
</ol>
<p>What the above should demonstrate is that even for situations in which we have only a few arrays laid out exactly as your resource suggests, the layout alone doesn't give us the speed. Good algorithms do. Data layout considerations, if any, flow from the algorithms we choose and higher-level hardware constraints.</p>
<p>If this is so for simple arrays and operations like matrix multiplication, by extension you should also expect it to be so for "fancy data structures" as well.</p> | 2020-09-01 17:15:02.973000+00:00 | 2020-09-01 17:42:52.117000+00:00 | 2020-09-01 17:42:52.117000+00:00 | null | 63,692,267 | <p>According to <a href="https://www.quora.com/How-do-I-learn-C-tricks-for-high-performance-computing" rel="nofollow noreferrer">this Quora forum</a>,</p>
<blockquote>
<p>One of the simplest rules of thumb is to remember that hardware loves arrays, and is highly optimized for iteration over arrays. A simple optimization for many problems is just to stop using fancy data structures and just use plain arrays (or std::vectors in C++). This can take some getting used to.</p>
</blockquote>
<p>Are C++ classes one of those "fancy data structures," i.e. a kind of data type that can be replaced by arrays to achieve a higher performance in a C++ program?</p> | 2020-09-01 17:01:03.160000+00:00 | 2020-09-22 07:55:02.650000+00:00 | 2020-09-01 17:07:04.497000+00:00 | c++|hpc | ['https://arxiv.org/abs/1903.03129', 'https://en.wikipedia.org/wiki/CPU_cache#Cache_miss', 'https://gist.github.com/jboner/2841832', 'https://github.com/r-barnes/how-to-optimize-gemm'] | 4 |
32,335,078 | <p>The full <code>fastica.m</code> output is:</p>
<pre><code>[icasig, A, W] = fastica(X); % outputs the estimated separating
matrix W and the corresponding mixing matrix A.
</code></pre>
<p>For a given data matrix <code>X</code> (assume <code>X</code> is a d x N matrix, i.e. each column is an observation), think about the following relationships that correspond to the output of <code>[Y, A, W] = fastica(X)</code>.</p>
<p><strong>Unmixing</strong>: <code>Y = W*X % X is separated on independent 'sources' Y</code> </p>
<p><code>W</code> is a k x d matrix (k<=d) that separates the sources or independent components from the mixtures in X (d x N) and stores them in Y (k x N). If k is smaller than d, the resulting representation is of reduced dimensionality.</p>
<p><strong>Mixing</strong>: <code>X = A*Y % Y sources are combined through the mixing matrix A</code></p>
<p><code>A</code> is an d x k matrix that mixes k components (k<=d) stored in matrix <code>Y</code> which is k x N (for N observations). <em>A stores the independent components extracted from X, i.e. d-dimensional vectors, k number in total</em>. Moreover <code>Y</code> stores the sources or <em>the projections of X on the independent components</em>. </p>
<p>From the output of <code>fastica</code> check the values of the following norms (the smaller the norms, the more accurate the separation of the ICA algorithm):</p>
<pre><code>norm(W*X - Y)
norm(X - A*Y)
norm(pinv(A) - W)
norm(pinv(W) - A)
</code></pre>
<p><strong>Regarding the OP questions:</strong></p>
<ul>
<li><code>icasig</code> represents the sources responsible of the data or the <em>projection</em> of the input data on the ICs estimated from <code>train_data</code>. In this case there are k ICs discovered, so 192 points are represented by their k values.</li>
<li>The ICs are given by the vectors in the columns of A. </li>
<li><code>icasig</code> already gives you the projection of the original data on the ICs.</li>
<li>You can use the separating matrix W (close to the pseudo-inverse of W) to project a new set of points <code>Xn</code> by <code>Yn = W*Xn</code>. </li>
</ul>
<p><strong>Example:</strong></p>
<pre><code>% Generate data (N=1000) from distribution
X = gendata(1000);
% Estimate ICs and projections of X
[Y, A, W] = fastica(X, 'approach', 'defl');
% New points from the same distribution
Xn = gendata(50);
% Project new point on ICA estimated independent components
Yn = W*Xn;
</code></pre>
<p><a href="https://i.stack.imgur.com/qj0Eo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qj0Eo.png" alt="enter image description here"></a></p>
<p>(Replace <code>gendata</code> with your own data generating function. For the plot I used the bimodal set, where the mixing is rotation in 2D, from J. Shlens, <a href="http://arxiv.org/abs/1404.2986" rel="nofollow noreferrer">A Tutorial on Independent Component Analysis</a>) </p> | 2015-09-01 15:08:10.277000+00:00 | 2015-09-01 15:26:45.247000+00:00 | 2015-09-01 15:26:45.247000+00:00 | null | 32,212,968 | <p>I'm having a question regarding ICA ,it maybe a little basic but I'm new to it. I'm using <a href="http://research.ics.aalto.fi/ica/fastica/" rel="nofollow">FastICA MATLAB toolbox</a></p>
<p>I'm using it as : </p>
<pre><code>[icasig] = fastica(train_data);
</code></pre>
<p>Where <code>train_data</code> is of size <code>[192x23]</code>.</p>
<p>What I understand is : <code>icasig</code> is supposed to be the independent components, so I was expecting that the size of it would be <code>23x23</code> ,the dimension number, like <code>PCA</code> output. Instead the size is <code>22x192</code>, where the dimensions are reduced to 22.
I don't understand what this represents.</p>
<p>So, my question is: does <code>icasig</code> represent the ICs? Then, if this is the case, how to use it to project the original data on the ICs?</p>
<p>If <code>icasig</code> represents the projection of original data on the ICs, how can I extract the ICs themselves to be used in another projection of the testdata?</p>
<p>Thanks a lot for your help.</p> | 2015-08-25 19:58:38.180000+00:00 | 2015-09-01 15:26:45.247000+00:00 | 2015-08-25 20:53:01.700000+00:00 | matlab|machine-learning|signal-processing | ['https://i.stack.imgur.com/qj0Eo.png', 'http://arxiv.org/abs/1404.2986'] | 2 |
49,531,019 | <p>There is a big difference between <code>tf.nn.batch_normalization</code> and <code>tf.layers.batch_normalization</code>. See <a href="https://stackoverflow.com/questions/48949318/what-is-the-difference-between-the-tensorflow-batch-normalization-implementation/48953548#48953548">my answer here</a>. So you have made the right choice by using the <code>layers</code> version. Now, on your questions:</p>
<ol>
<li><code>renorm_momentum</code> only has an effect is you use <a href="https://arxiv.org/abs/1702.03275" rel="nofollow noreferrer">batch renormalization</a> by setting the <code>renorm</code> argument to <code>True</code>. You can ignore this if using default batch normalization.</li>
<li>Short answer: You can literally copy that code snippet. Put it exactly where you would normally call <code>optimizer.minimize</code>.</li>
</ol>
<p>Long answer on 2.: Batch normalization has two "modes": Training and inference. During training, the mean and variance of the current minibatch are used. During inference, this is not desirable (e.g. you might not even use batches as input, so there would be no minibatch statistics). For this reason, moving averages over minibatch means/variances are kept during training. These moving averages are then used for inference.<br>
By default, Tensorflow only executes what it needs to. Those moving averages are not needed for training, so they normally would never be executed/updated. The <code>tf.control_dependencies</code> context manager forces Tensorflow to do the updates every time it computes whatever is in the code block (in this case the cost). Since the cost certainly needs to be computed exactly once per training step, this is a good way of making sure the moving averages are updated.</p>
<p>The code example seems a bit arcane, but in context it would really just be (as an example):</p>
<pre><code>loss = ...
train_step = SomeOptimizer().minimize(loss)
with tf.Session() as sess:
....
</code></pre>
<p>becomes</p>
<pre><code>loss = ...
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_step = SomeOptimizer().minimize(loss)
with tf.Session() as sess:
....
</code></pre>
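<p>One common way to wire that <code>training</code> argument together with the update ops, as a self-contained sketch in the same TF 1.x API (the toy network and its sizes are arbitrary):</p>
<pre><code>import numpy as np
import tensorflow as tf  # assumes TF 1.x, matching the rest of this answer

x = tf.placeholder(tf.float32, [None, 8])
is_training = tf.placeholder(tf.bool, name='is_training')

h = tf.layers.dense(x, 16)
h = tf.layers.batch_normalization(h, momentum=0.9, training=is_training)
h = tf.nn.relu(h)
out = tf.layers.dense(h, 1)
loss = tf.reduce_mean(tf.square(out))

# force the moving-average updates to run with every training step
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_step = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.randn(4, 8).astype(np.float32)
    sess.run(train_step, feed_dict={x: batch, is_training: True})   # training: batch statistics
    sess.run(out, feed_dict={x: batch, is_training: False})         # inference: moving averages
</code></pre>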
<p>Finally, keep in mind to use the correct <code>training</code> argument for batch normalization so that either minibatch statistics or moving averages are used as intended.</p> | 2018-03-28 09:35:50.097000+00:00 | 2018-03-28 09:35:50.097000+00:00 | null | null | 49,528,440 | <p>I want to replicate a network build with the lasagne-library in tensor flow. I'm having some trouble with the batch normalization.
This is the lasagne documentation about the used batch normalization:
<a href="http://lasagne.readthedocs.io/en/latest/modules/layers/normalization.html?highlight=batchNorm" rel="nofollow noreferrer">http://lasagne.readthedocs.io/en/latest/modules/layers/normalization.html?highlight=batchNorm</a></p>
<p>In tensorflow I found two functions to normalize:</p>
<ol>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization</a></li>
<li><a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization</a></li>
</ol>
<p>The first one is simpler but does not let me choose the alpha parameter from lasagne (Coefficient for the exponential moving average of batch-wise means and standard deviations computed during training). I tried using the second function, which has a lot more options, but there are two things I do not understand about it:</p>
<ol>
<li>I am not clear about the difference between momentum and renorm_momentum. If I have a alpha of 0.9 in the lasagne network, can I just set both tensorflow momentums to 0.9 and expect the same behaviour?</li>
<li>The tf documentation notes: </li>
</ol>
<p>when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op. For example:</p>
<pre><code> update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
</code></pre>
<p>I do not really understand what is happening here and where I need to put something similar in my code. Can I just put this somewhere before I run the session? What parts of this code piece should I not copy literally but change depending on my code?</p> | 2018-03-28 07:23:02.647000+00:00 | 2018-03-28 09:35:50.097000+00:00 | null | tensorflow|neural-network|batch-normalization | ['https://stackoverflow.com/questions/48949318/what-is-the-difference-between-the-tensorflow-batch-normalization-implementation/48953548#48953548', 'https://arxiv.org/abs/1702.03275'] | 2 |
65,861,591 | <p>To better understand the original concept of genetic drift (Biology), I suggest you read <a href="https://www.khanacademy.org/science/ap-biology/natural-selection/population-genetics/a/genetic-drift-founder-bottleneck#:%7E:text=Genetic%20drift%20is%20a%20mechanism,are%20strongest%20in%20small%20populations." rel="nofollow noreferrer">this</a> Khan Academy's article. Simply put, you can think of it as an evolutionary phenomenon in which the frequency of one or more alleles (versions of a gene) in a population changes due to random factors (unrelated to the fitness of each individual). If the fittest individual of a population is struck, out of bad luck, by a lightning and dies before reproducing, he won't leave offspring (although he has the highest fitness!). This is an example (somewhat absurd, I know) of genetic drift.</p>
<p>Now, in the specific context of evolutionary algorithms, <a href="https://arxiv.org/abs/1906.08870" rel="nofollow noreferrer">this</a> paper provides a good summary on the subject:</p>
<blockquote>
<p>EAs genetic drift can be as a result of a combination of factors,
primarily related to selection, fitness function and representation.
It happens by unintentional loss of genotypes. For example, random
chance that a good genotype solution never gets selected for
reproduction. Or, if there is a ‘lifespan’ to a solution and it dies
before it can reproduce. Normally such a genotype only resides in the
population for a limited number of generations.</p>
<p>(Sloss & Gustafson, 2019)</p>
</blockquote>
<p>Finally, I will give you a real example of genetic drift acting on a genetic algorithm. Recently, I've used a simple neuroevolution algorithm to create an agent capable of playing the Snake game (<a href="https://github.com/Talendar/neuroevolutionary_snake" rel="nofollow noreferrer">GitHub repo</a>). In my implementation of the game, the apples appear in random positions of the screen. When executing the evolutionary process for the first time, I noticed a big fluctuation in the population's best fitness between consecutive generations - overall, it wasn't improving much. Because of this, my algorithm was unable to converge to a good solution.</p>
<p>After some debugging, I found out that this was being caused by genetic drift. Because the apples spawned in random positions, some individuals, not necessarily the fittest, were lucky and got "easy apples", thus achieving a high fitness and leaving more offspring. Do you see the problem here?</p>
<p>Suppose that snake A is better at the game than snake B, because it can move towards the food, while B only moves randomly. Now, suppose that the first food that appeared for snake A was in a corner of the screen (a difficult position) and A died shortly after eating the apple. Now, suppose that snake B was lucky enough to have 3 apples spawn in a row, one after the other. Although B is "dumber" than A, it will leave more offspring, because it achieved a greater fitness. B's offspring will "pollute" the next generation, because they'll probably be "dumb" like B.</p>
<p><a href="https://www.youtube.com/watch?v=iPUVPpUCf1g&ab_channel=Talendar" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kCxRQ.gif" height="350"></a></p>
<p>I solved the problem using a better apple positioning algorithm (I defined a minimum distance between the spawning position of two consecutive apples) and by calculating each individual's final fitness as the average of its fitness in several playing sessions. This greatly reduced (although it did not eliminate) the interference of genetic drift in my algorithm.</p>
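<p>The averaging trick from the previous paragraph is simple to express in code (a sketch; <code>play_one_game</code> here is only a stand-in for a real game session):</p>
<pre><code>import random

def play_one_game(individual):
    # stand-in for a real game session that returns a score
    return random.random()

def fitness(individual, n_sessions=5):
    # average over several sessions so a single lucky or unlucky game
    # (e.g. easy apple positions) does not dominate the evaluation
    scores = [play_one_game(individual) for _ in range(n_sessions)]
    return sum(scores) / n_sessions
</code></pre>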
<p>I hope this helps. You can also take a look at <a href="https://www.youtube.com/watch?v=iPUVPpUCf1g&ab_channel=Talendar" rel="nofollow noreferrer">this</a> video (it's in Portuguese, but English subtitles are available), where I explain some of the strategies I used to make the Snake AI.</p> | 2021-01-23 16:28:24.410000+00:00 | 2021-01-23 16:28:24.410000+00:00 | null | null | 65,618,327 | <p>I have read in some articles on evolutionary computing that the algorithms generally converge to a single solution due to the phenomenon of genetic drift. There is a lot of content on the Internet, but I can't get a deep understanding of this concept. I need to know, simply and precisely:</p>
<ul>
<li>What is genetic drift in the context of evolutionary computing?</li>
<li>How does it affect the convergence of an evolutionary algorithm?</li>
</ul> | 2021-01-07 18:53:36.100000+00:00 | 2021-01-24 06:12:45.630000+00:00 | 2021-01-24 06:12:45.630000+00:00 | artificial-intelligence|genetic-algorithm|evolutionary-algorithm|genetics | ['https://www.khanacademy.org/science/ap-biology/natural-selection/population-genetics/a/genetic-drift-founder-bottleneck#:%7E:text=Genetic%20drift%20is%20a%20mechanism,are%20strongest%20in%20small%20populations.', 'https://arxiv.org/abs/1906.08870', 'https://github.com/Talendar/neuroevolutionary_snake', 'https://www.youtube.com/watch?v=iPUVPpUCf1g&ab_channel=Talendar', 'https://www.youtube.com/watch?v=iPUVPpUCf1g&ab_channel=Talendar'] | 5 |
52,511,367 | <p>I got the answers from <em>Cleverhans</em> developers on <a href="https://github.com/tensorflow/cleverhans/issues/589" rel="nofollow noreferrer">github</a>, I quote their answer here:</p>
<p><strong>Chapter 1:</strong></p>
<p>FGSM (like any attack) is not guaranteed to find an adversarial image that is misclassified by the model because it makes approximations when solving the optimization problem that defines an adversarial example. </p>
<p>The attack can fail to find adversarial images for various reasons; one common reason is gradient masking. You can read about it in this <a href="http://www.cleverhans.io/security/privacy/ml/2017/02/15/why-attacking-machine-learning-is-easier-than-defending-it.html" rel="nofollow noreferrer">blog post</a> and in this <a href="https://arxiv.org/abs/1602.02697" rel="nofollow noreferrer">paper</a> as well as this <a href="https://arxiv.org/abs/1802.00420" rel="nofollow noreferrer">paper</a>.</p>
<p>The <em>eps</em> step is important because it is the magnitude of the perturbation. The attack first computes the direction in which to perturb the image (using gradients of the model) and then takes a step of size <em>eps</em> in that direction. Hence, <em>eps</em> corresponds roughly to what one would intuitively think of as the "power" of the attack.</p>
<p>You can find a multi-step variant of FGSM in <code>BasicIterativeMethod</code>.</p>
<p><strong>Chapter 2:</strong> </p>
<p><em>y</em> is used to specify labels in the case of an untargeted attack (any wrong class is considered as success for the adversary) whereas <em>y_target</em> is used to specify a target class in the targeted attack case (the adversary is successful only if the model makes a particular misclassification in a chosen class). </p>
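<p>Putting these together, a hedged sketch of a targeted, multi-step attack in the style of the snippet from the question (it reuses <code>wrap</code>, <code>session</code>, <code>x</code> and <code>original_image</code> from there and assumes the same Cleverhans version; the target class and the step counts are arbitrary):</p>
<pre><code>from cleverhans.attacks import BasicIterativeMethod
import numpy as np
import tensorflow as tf

bim = BasicIterativeMethod(wrap, sess=session)
y_target = tf.placeholder(tf.float32, shape=(None, 5))
bim_params = {'eps': 16. / 256,
              'eps_iter': 2. / 256,   # perturbation added per step
              'nb_iter': 20,          # "try harder": more steps
              'y_target': y_target,   # makes the attack targeted
              'clip_min': 0.,
              'clip_max': 1.}
adv_x = bim.generate(x, **bim_params)

target = np.zeros((1, 5))
target[0, 1] = 1.  # aim for the 2nd class
adv_image = adv_x.eval(session=session,
                       feed_dict={x: original_image, y_target: target})
</code></pre>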
<p>It is often the case that targeted attacks require more perturbation (i.e., higher <em>eps</em> values in the FGSM case) than untargeted attacks.</p> | 2018-09-26 06:35:02.480000+00:00 | 2018-09-26 06:35:02.480000+00:00 | null | null | 52,501,833 | <p>I have a keras model (CNN with final softmax) that is an RGB image classifier. Output of the model are 5 possible categories for input images (one-hot encoded).
I'm trying to generate adversarial images for my keras model with the <em>Cleverhans</em> (<a href="https://github.com/tensorflow/cleverhans" rel="nofollow noreferrer">tensorflow library</a>).</p>
<p>A simplified version of my code which generates one adversarial image is the following:</p>
<pre><code># model is the CNN keras model
wrap = KerasModelWrapper(model)
fgsm = FastGradientMethod(wrap, sess=session)
fgsm_params = {'eps': 16. / 256,
'clip_min': 0.,
'clip_max': 1.
}
x = tf.placeholder(tf.float32, shape=(None, img_rows, img_cols,
nchannels))
adv_x = fgsm.generate(x, **fgsm_params)
# original image is a tensor containing only one RGB image, shape=(1,48,48,3)
adv_image = adv_x.eval(session=session, feed_dict={x: original_image})
</code></pre>
<p><strong>Chapter 1, eps</strong></p>
<p>From my understanding, <em>'eps'</em> FGM param is the input variation step (minimum change for one image value/pixel).</p>
<p>I have observed that the final outcome is highly affected by <em>eps</em>, sometimes I need high <em>eps</em> in order to obtain an effective adversarial image, an image which effectively changes the category label in respect to the original image. </p>
<p>With low <em>eps</em> sometimes FGM fails to obtain a working adversarial image i.e., having an image O, with label l<sub>O</sub> FGM fails to produce adversarial image O' with l<sub>O'</sub>!= l<sub>O</sub>, e.g., for l<sub>O</sub> = [0,0,1,0,0] we still obtain l<sub>O'</sub> = [0,0,1,0,0], failing to generate an adversarial image with a different label.</p>
<p>Questions (I'm sorry the problem requires a set of questions):</p>
<ul>
<li>Does FGM always find out a working adversarial image? i.e., Is it normal that FGM fails?</li>
<li>Is there a way to obtain an estimated quality of the generated adversarial image (without predicting with model)?</li>
<li>Why is the value of eps step so important?</li>
<li>Most important: <strong>Is there a way to tell FGM to try harder searching for the adversarial image(e.g, more steps)?</strong></li>
</ul>
<p><strong>Chapter 2, y,y_target</strong></p>
<p>I have also experimented with the <em>y</em> and <em>y_target</em> params.
Can you also explain what the params <code>'y'</code> and <code>'y_target'</code> are?</p>
<p>I thought <code>'y_target'</code> tells that we want to generate an adversarial image that targets a specific category.
For example I thought that <code>y_target = [[0,1,0,0,0]]</code> in <code>feed_dict</code> should force to generate an adversarial image which is classified with the 2th class from the model.</p>
<ul>
<li>Am I right? ..or </li>
<li>do I miss something?</li>
</ul>
<p>P.s: my problem is that setting y_target fails to produce adversarial images.</p>
<p>please give me few tips.. ;-)
Regards</p> | 2018-09-25 15:30:46.563000+00:00 | 2018-09-26 06:35:02.480000+00:00 | 2018-09-25 15:39:39.480000+00:00 | python|tensorflow|keras|gradient-descent|multiclass-classification | ['https://github.com/tensorflow/cleverhans/issues/589', 'http://www.cleverhans.io/security/privacy/ml/2017/02/15/why-attacking-machine-learning-is-easier-than-defending-it.html', 'https://arxiv.org/abs/1602.02697', 'https://arxiv.org/abs/1802.00420'] | 4 |
23,327,816 | <p>It looks like an efficient algorithm is going to be hard:</p>
<p>From <a href="http://en.wikipedia.org/wiki/Separable_state#Separability_criterion" rel="nofollow noreferrer">wikipedia</a>:</p>
<blockquote>
<p>The problem of deciding whether a state is separable in general is
sometimes called the separability problem in quantum information
theory. It is considered to be a difficult problem. It has been shown
to be NP-hard.</p>
<p>Gurvits, L., Classical deterministic complexity of Edmonds’ problem and quantum entanglement, in Proceedings of the 35th ACM Symposium on Theory of Computing, ACM Press, New York, 2003.</p>
<p>Sevag Gharibian, Strong NP-Hardness of the Quantum Separability Problem, Quantum Information and Computation, Vol. 10, No. 3&4, pp. 343-360, 2010. arXiv:0810.4507</p>
</blockquote> | 2014-04-27 19:17:27.747000+00:00 | 2014-04-27 19:17:27.747000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 23,326,921 | <p>I'm looking for algorithms that take an arbitrary quantum state made up of a sum of weighted classical states made up of bits, like this:</p>
<pre><code>|0000>/2 - |0011>/2 + |0100>/2 - |0111>/2
</code></pre>
<p>and factor it into a more compact form using tensor products, like this:</p>
<pre><code>|0> x (|0> + |1>) x (|00> - |11>) / 2
</code></pre>
<p>I want to use the algorithm as a way of visualizing/simplifying the state of a (simulated) quantum circuit.</p>
<p>For individual qubits I know I can just pair all the states with the state where the bit is flipped and check that every pair has the same x:y relation between the states. In the example above, flipping the second bit always gives you a state with a 1:1 weighting, so the second bit factors out as (1|0> + 1|1>).</p>
<p>But extending that approach to detect entangled bits (like the third and fourth in the example) causes it to take at least <code>Ω(n^c)</code> time (probably more, I haven't thought it all the way through), where <code>n</code> is the number of states and <code>c</code> is the number of entangled bits. Since <code>n</code> is already growing exponentially with the number of bits that's... not ideal.</p>
<p>Are there better algorithms? Representations easier to factor from/to? How useful is changing the basis? Links to papers would be great.</p> | 2014-04-27 17:55:19.700000+00:00 | 2014-04-27 20:05:06.750000+00:00 | 2014-04-27 20:05:06.750000+00:00 | algorithm|factorization|quantum-computing | ['http://en.wikipedia.org/wiki/Separable_state#Separability_criterion'] | 1 |
53,403,228 | <p>The correct interpretation is that each iteration works on a batch.
In the original <a href="https://arxiv.org/pdf/1701.07875.pdf" rel="nofollow noreferrer">paper</a>, each iteration of the critic/discriminator samples a batch of size <code>m</code> of the real data and a batch of size <code>m</code> of prior samples from <code>p(z)</code> to work on. After the critic has been trained for <code>Diters</code> iterations, they train the generator, which also starts by sampling a batch of prior samples from <code>p(z)</code>.
Therefore, each iteration works on a batch.</p>
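<p>In pseudo-Python, the loop structure is therefore roughly the following (a sketch of the structure only; the sampling and update functions are placeholders for real implementations):</p>
<pre><code>m, Diters, n_generator_iterations = 64, 5, 1000

def sample_real_batch(m): ...        # placeholder: draw m real examples
def sample_prior_batch(m): ...       # placeholder: draw m noise vectors z ~ p(z)
def update_critic(real, noise): ...  # placeholder: one critic gradient step
def update_generator(noise): ...     # placeholder: one generator gradient step

for gen_iteration in range(n_generator_iterations):
    # critic: Diters updates, each on a fresh minibatch
    for _ in range(Diters):
        update_critic(sample_real_batch(m), sample_prior_batch(m))
    # generator: a single update, again on a fresh batch of prior samples
    update_generator(sample_prior_batch(m))
</code></pre>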
<p>In the <a href="https://github.com/martinarjovsky/WassersteinGAN/blob/master/main.py" rel="nofollow noreferrer">official implementation</a> this is also happening. What may be confusing is that they use the variable name <code>niter</code> to represent the number of epochs to train the model. Although they use a different scheme to set <code>Diters</code> at lines <a href="https://github.com/martinarjovsky/WassersteinGAN/blob/f81eafd2aa41e93698f203732f8f395abc70be02/main.py#L162" rel="nofollow noreferrer">162</a>-166:</p>
<pre><code># train the discriminator Diters times
if gen_iterations < 25 or gen_iterations % 500 == 0:
Diters = 100
else:
Diters = opt.Diters
</code></pre>
<p>they are, as in the paper, training the critic over <code>Diters</code> batches.</p> | 2018-11-20 23:36:46.487000+00:00 | 2018-11-20 23:36:46.487000+00:00 | null | null | 53,401,431 | <p>I'm running a DCGAN-based GAN, and am experimenting with WGANs, but am a bit confused about how to train the WGAN.</p>
<p>In the official <a href="https://github.com/martinarjovsky/WassersteinGAN/blob/master/main.py" rel="nofollow noreferrer">Wasserstein GAN PyTorch implementation</a>, the discriminator/critic is said to be trained <code>Diters</code> (usually 5) times per each generator training.</p>
<p>Does this mean that the critic/discriminator trains on <code>Diters</code> <em>batches</em> or the <em>whole dataset</em> <code>Diters</code> times? If I'm not mistaken, the official implementation suggests the discriminator/critic is trained on the <em>whole dataset</em> <code>Diters</code> times, but other implementations of WGAN (in PyTorch and TensorFlow etc.) do the opposite.</p>
<p>Which is correct? <a href="https://arxiv.org/pdf/1701.07875.pdf" rel="nofollow noreferrer">The WGAN paper</a> (to me, at least), indicates that it is <code>Diters</code> <em>batches</em>. Training on the whole dataset is obviously orders of magnitude slower.</p>
<p>Thanks in advance!</p> | 2018-11-20 20:58:56.710000+00:00 | 2020-01-13 04:01:44.330000+00:00 | null | python-3.x|tensorflow|machine-learning|deep-learning|pytorch | ['https://arxiv.org/pdf/1701.07875.pdf', 'https://github.com/martinarjovsky/WassersteinGAN/blob/master/main.py', 'https://github.com/martinarjovsky/WassersteinGAN/blob/f81eafd2aa41e93698f203732f8f395abc70be02/main.py#L162'] | 3 |
55,925,634 | <p>Unfortunately, I don't think it ever got officially documented, but PCL has a command line application to report the <a href="https://en.wikipedia.org/wiki/Hausdorff_distance" rel="nofollow noreferrer">Hausdorff distance</a> between two clouds. Try running <code>pcl_compute_hausdorff</code>. It's also available in the PDAL library (<a href="https://pdal.io/apps/hausdorff.html" rel="nofollow noreferrer">https://pdal.io/apps/hausdorff.html</a>), where you would instead run <code>pdal hausdorff</code>.</p>
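<p>If you just want a quick number in code rather than a command-line tool, here is a minimal SciPy sketch of the nearest-neighbour distances that both the Hausdorff metric above and the Chamfer distance mentioned below are built from (random arrays stand in for real clouds; the Chamfer convention shown is one of several in use):</p>
<pre><code>import numpy as np
from scipy.spatial import cKDTree

cloud_a = np.random.rand(1000, 3)   # placeholder for the first cloud
cloud_b = np.random.rand(1000, 3)   # placeholder for the second cloud

d_ab, _ = cKDTree(cloud_b).query(cloud_a)   # each point of A to its nearest point of B
d_ba, _ = cKDTree(cloud_a).query(cloud_b)   # and the other direction

hausdorff = max(d_ab.max(), d_ba.max())     # worst-case mismatch
chamfer = d_ab.mean() + d_ba.mean()         # average mismatch
print(hausdorff, chamfer)
</code></pre>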
<p>Another common one is Chamfer distance (as described in <a href="https://arxiv.org/abs/1612.00603" rel="nofollow noreferrer">https://arxiv.org/abs/1612.00603</a>), though I'm not immediately aware of an implementation.</p> | 2019-04-30 17:00:38.823000+00:00 | 2019-04-30 17:00:38.823000+00:00 | null | null | 55,913,968 | <p>What are some metrics or methods that are used widely to compare similarity of two point clouds objects ? ( Ex. It could be PCD file or PLY file).</p>
<p>I have searched in PCL library's document but not found. Googled it, found some research but they talk about new method not what is widely or already used.</p>
<p>Is there any basic method to compare similarity of point clouds ? Or even some function in PCL library that will do the job ?</p> | 2019-04-30 04:25:15.630000+00:00 | 2021-02-03 02:26:03.740000+00:00 | null | point-cloud-library|point-clouds | ['https://en.wikipedia.org/wiki/Hausdorff_distance', 'https://pdal.io/apps/hausdorff.html', 'https://arxiv.org/abs/1612.00603'] | 3 |
59,929,314 | <p>Transfer learning is especially interesting for accuracy when you don't have enough data. For example, <a href="https://arxiv.org/abs/1811.08883" rel="nofollow noreferrer">this paper</a> compared training with and without pretraining on ImageNet. They claim that beyond 10k images, pretraining no longer gives better results, but it still lets the model train faster.<br>
So if you have a small dataset, your question still holds: should you pretrain on ImageNet or on another dataset? I think the answer to this question is given in the following paragraph (the references there are probably of interest to you):</p>
<blockquote>
<p><strong>Do we need big data?</strong> Yes. But a generic large-scale, classification-level pre-training set is not ideal if we take into account the extra effort of collecting and cleaning data—the cost of collecting ImageNet has been largely ignored, but the ‘pre-training’ step in the ‘pre-training +fine-tuning’ paradigm is in fact not free when we scale out this paradigm. If the gain of large-scale classification-level pre-training becomes exponentially diminishing [44, 30], it would be more effective to collect data in the target domain.</p>
</blockquote>
<p>Therefore, you also need to consider the quality of your satellite image dataset. Since it should be closer to your target data than ImageNet, it is probably the better source for pretraining.</p> | 2020-01-27 10:30:34.487000+00:00 | 2020-01-27 10:30:34.487000+00:00 | null | null | 59,928,877 | <p>I am looking for a reference paper showing that transfer learning should come from a domain-specific source model rather than from a generalised model such as ImageNet.</p>
<p>For example Source dataset satellite/drone hyper/multi spectral images of plants and target dataset of hyper/multi spectral images of plants captured using agricultural robot</p>
<p>As compared to </p>
<p>Source dataset ImageNet model and target dataset images of plants captured using agricultural robot</p> | 2020-01-27 10:05:36.663000+00:00 | 2020-01-27 10:30:34.487000+00:00 | null | machine-learning|deep-learning|computer-vision|conv-neural-network|transfer-learning | ['https://arxiv.org/abs/1811.08883'] | 1 |
66,403,831 | <p>To start I would recommend using stride 1, not two in your first layer. In the first two layers (conv2d and maxpool) you are already downsampling the image to 16x16, and the network hasn't had a chance to do much. You want to have at least a few layers for each unique number of filters before you downsample.</p>
<p>E.g architecture that might work better:</p>
<p>(conv2d 64 filters, stride 1) x 3</p>
<p>conv2d 128 filters, stride 2</p>
<p>(conv2d 128 filters, stride 1) x 3</p>
<p>conv2d 256 filters, stride 2</p>
<p>(conv2d 256 filters, stride 1) x 3</p>
<p>flatten</p>
<p>dense layers</p>
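<p>Spelled out as a hedged Keras sketch (same 64x64x3 input and 37 classes as in the question; the exact widths and the dense head are just one reasonable choice):</p>
<pre><code>from tensorflow.keras import layers, models

model = models.Sequential()
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu', input_shape=(64, 64, 3)))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), strides=2, padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), strides=2, padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.Conv2D(256, (3, 3), padding='same', activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(37, activation='softmax'))
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>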
<p>For more ideas for architecture design, I recommend looking at models such as VGG:
<a href="https://arxiv.org/pdf/1409.1556.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1409.1556.pdf</a> (page 3).</p>
<p>You won't be able to verbatim copy those since your data is smaller, but note how they have more convolutional layers and less dense layers. They also don't downsample as harshly in the beginning.</p>
<p>I'm also curious as to the size of your dataset, and what is your training/validation/testing split. Do you successfully get close to 100% accuracy on the training data?</p> | 2021-02-27 21:57:28.497000+00:00 | 2021-02-27 22:19:10.443000+00:00 | 2021-02-27 22:19:10.443000+00:00 | null | 66,403,770 | <p>I've been trying to train a 2D CNN for an image classification problem. My data consists of 64 by 64 pixel images each labeled with a number from 1-37. I have my CNN architecture below:</p>
<pre><code>train_dataset = train.flow_from_directory('/kaggle/input/temp-frames/frames/train', target_size=(64,64), batch_size=256, class_mode='categorical')
validation_dataset = train.flow_from_directory('/kaggle/input/temp-frames/frames/validation', target_size=(64,64), batch_size=256, class_mode='categorical')
model = Sequential()
model.add(Conv2D(filters= 64, kernel_size=(3,3), activation ='relu',strides = (2,2), padding = 'valid', input_shape= (64,64,3)))
model.add(MaxPooling2D(pool_size=(2,2), padding='same'))
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(1024, activation='relu'))
model.add(Dropout(.5))
model.add(Dense(37))
model.add(Activation('softmax'))
optimizer = keras.optimizers.Adam(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_dataset, epochs = 100, batch_size = 32, validation_data = validation_dataset, shuffle = True)
</code></pre>
<p>For some reason, my 2D CNN(get accuracy of 16%) performs worse than my 1D CNN(gets accuracy of 30%). I am wondering if there is any way to improve my model to achieve better results.</p> | 2021-02-27 21:48:27.773000+00:00 | 2021-02-28 01:01:15.527000+00:00 | 2021-02-28 01:01:15.527000+00:00 | python|tensorflow|keras|deep-learning|conv-neural-network | ['https://arxiv.org/pdf/1409.1556.pdf'] | 1 |
69,755,703 | <p>There are two ways to train a BERT-based classification model:</p>
<ol>
<li><p><strong>Finetuning</strong>: Which is the practice of training your classifier along with your text encoder (BERT in this case, but it can be any other text encoder, e.g., RoBERTa, ALBERT...). In this setting, the encoder and the classifier are both trained at the same time.</p>
</li>
<li><p><strong>BERT as an embedding model</strong>: Here you freeze the weights of BERT, and you only train the classifier. At the end of such a setting, BERT would be exactly the same as before training.</p>
</li>
</ol>
<p>Research has shown that finetuning delivers slightly better results than when using BERT as embeddings. You can find the original research paper where these results are discussed <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">here</a>.</p>
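<p>For reference, the second option (freezing the encoder) usually comes down to a few lines before training; a sketch assuming <code>BertForSequenceClassification</code>, whose encoder lives in the <code>bert</code> attribute (the checkpoint name and label count are arbitrary):</p>
<pre><code>from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=5)

# freeze the encoder, keep only the classification head trainable
for param in model.bert.parameters():
    param.requires_grad = False
</code></pre>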
<p>What you are suggesting though is a tradeoff between the two, and it is interesting. I never tried that out myself, but I suspect you will run into overfitting since you train your classifier twice on the same data. So I suppose you will have better results than freezing BERT, but your model will have harder time generalizing to unseen data than the finetuning method.</p>
<p>Yacine</p> | 2021-10-28 14:13:40.020000+00:00 | 2021-10-28 14:18:49.110000+00:00 | 2021-10-28 14:18:49.110000+00:00 | null | 68,524,992 | <p>I have a question about training BERT classification(or pretrained model).</p>
<p>BERT classifier model usually constructed 2 models. BERT model and classifier.</p>
<p>Many BERT fine tuning example code is training BERT model and classifier layer at once.
But I think, classifier is training first and BERT weight should not updated. After classifier trained, training all model layers.</p>
<p>Example</p>
<pre class="lang-py prettyprint-override"><code>import torch
from transformers import BertForSequenceClassification
model = BertForSequenceClassification()
...
# training1
for name, param in model.named_parameters():
if 'classifier' in name:
param.requires_grad = True # only classifier update
else:
param.requires_grad = False # tied other layer
...
# After training1, we can use a BERT model in which only the classifier has been trained.
model = BertForSequenceClassification()
model.load_state_dict(torch.load({model only trained classifier})
for name, param in model.named_parameters():
param.requires_grad = True # training all
# training BERT Classification model
</code></pre>
<p>Why is the BERT classification model trained all at once?
Thank you.</p> | 2021-07-26 05:43:22.127000+00:00 | 2021-10-28 14:18:49.110000+00:00 | null | machine-learning|nlp|huggingface-transformers|bert-language-model | ['https://arxiv.org/pdf/1810.04805.pdf'] | 1 |
63,020,188 | <p>During training, varying batch statistics act as a regularization mechanism that can improve ability to generalize. This can help to minimize overfitting when training for a high number of iterations. Indeed, using a very large batch size <a href="https://arxiv.org/abs/1804.07612" rel="noreferrer">can harm generalization</a> as there is less variation in batch statistics, decreasing regularization.</p>
<p>When fine-tuning on a new dataset, batch statistics are likely to be very different if fine-tuning examples have different characteristics to examples in the original training dataset. Therefore, if batch normalization is not frozen, the network will learn new batch normalization parameters (gamma and beta in the <a href="https://arxiv.org/abs/1502.03167" rel="noreferrer">batch normalization paper</a>) that are different to what the other network paramaters have been optimised for during the original training. Relearning all the other network parameters is often undesirable during fine-tuning, either due to the required training time or small size of the fine-tuning dataset. Freezing batch normalization avoids this issue.</p> | 2020-07-21 17:46:29.237000+00:00 | 2020-07-21 17:53:00.990000+00:00 | 2020-07-21 17:53:00.990000+00:00 | null | 63,016,740 | <p>The following content comes from Keras tutorial</p>
<blockquote>
<p>This behavior has been introduced in TensorFlow 2.0, in order to enable layer.trainable = False to produce the most commonly expected behavior in the convnet fine-tuning use case.</p>
</blockquote>
<p>Why should we freeze the layers when fine-tuning a convolutional neural network? Is it because of some mechanism in tensorflow/keras, or because of the batch normalization algorithm? I ran an experiment myself and found that if trainable is not set to false, the model tends to catastrophically forget what it has learned before and returns a very large loss in the first few epochs. What's the reason for that?</p> | 2020-07-21 14:26:10.840000+00:00 | 2020-07-21 17:53:00.990000+00:00 | null | python|tensorflow|keras|tensorflow2.0|batch-normalization | ['https://arxiv.org/abs/1804.07612', 'https://arxiv.org/abs/1502.03167'] | 2
73,231,817 | <p>With the limited information that you provide, this is the simplest solution (I assume that your generator creates images from noise such as the <a href="https://arxiv.org/abs/1406.2661" rel="nofollow noreferrer">original gans</a>):</p>
<pre class="lang-py prettyprint-override"><code>import torch
def get_data(batch_size, generator, latent_dim=512):
    z = torch.randn(batch_size, latent_dim)
    return generator(z)

def dataloader(batch_size, generator, iteration, latent_dim=512):
    for i in range(iteration):
        yield get_data(batch_size, generator, latent_dim)

batch_size = 64
generator = GANs(...)
iteration = 100
latent_dim = 512
loader = dataloader(batch_size, generator, iteration, latent_dim)
for images in loader:
    pass  # do something with each batch of generated images
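
# If an actual torch DataLoader object is needed (for example because a framework
# such as pytorch-lightning expects one), one option is to wrap the generator in an
# IterableDataset (a sketch, not the only way); batch_size=None keeps the batches
# produced above intact instead of re-batching them
from torch.utils.data import IterableDataset, DataLoader

class GeneratedBatches(IterableDataset):
    def __init__(self, batch_size, generator, iteration, latent_dim=512):
        super().__init__()
        self.args = (batch_size, generator, iteration, latent_dim)

    def __iter__(self):
        return dataloader(*self.args)

torch_loader = DataLoader(GeneratedBatches(batch_size, generator, iteration, latent_dim),
                          batch_size=None)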
</code></pre> | 2022-08-04 07:21:22.823000+00:00 | 2022-08-04 07:29:19.240000+00:00 | 2022-08-04 07:29:19.240000+00:00 | null | 73,228,139 | <p>I have a generator that creates synthetic data. How can I convert this into a PyTorch dataloader?</p> | 2022-08-03 21:44:48.987000+00:00 | 2022-08-04 07:29:33.003000+00:00 | null | pytorch|pytorch-lightning | ['https://arxiv.org/abs/1406.2661'] | 1 |
49,733,160 | <p>It would make this much easier if you shared a small dataset that illustrates the problem. However, I will describe some of the issues that come up with non-standard datasets and how to overcome them.</p>
<p><strong>Possible solutions</strong></p>
<ol>
<li><p><em>Regularization and validation-based optimization</em> - methods that are always worth trying when looking for some extra accuracy. See the dropout method <a href="http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf?utm_content=buffer79b43&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer" rel="nofollow noreferrer">here</a> (original paper), and some overview <a href="https://arxiv.org/pdf/1708.02182.pdf" rel="nofollow noreferrer">here</a>.</p></li>
<li><p><em>Unbalanced data</em> - sometimes the time-series categories/events behave like anomalies, or are simply unbalanced. If you read a book, words like <em>the</em> or <em>it</em> appear far more often than <em>warehouse</em> and the like. This can become a problem if your main task is to detect the word <em>warehouse</em> and you train your network (even LSTMs) in the traditional way. A way to overcome this problem is to balance the samples (create balanced datasets) or to give more weight to low-frequency categories.</p></li>
<li><p><em>Model structure</em> - sometimes fully connected layers are not enough. See computer vision problems, for instance, where we train using convolution layers. The convolution and pooling layers enforce structure on the model, which is suitable for images. This is also a form of regularization, since those layers have fewer parameters. In time-series problems, convolutions are also possible and turn out to work just fine; see the sketch after this list and the example in <a href="https://arxiv.org/pdf/1703.04691.pdf" rel="nofollow noreferrer">Conditional Time Series Forecasting with Convolution Neural Networks</a>.</p></li>
</ol>
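<p>For the third point, a minimal Keras sketch of a 1-D convolutional model for time-series input (shapes and widths are illustrative only; <code>window</code> is the number of past time steps fed to the model):</p>
<pre><code>from tensorflow.keras import layers, models

window, n_features = 32, 5
model = models.Sequential([
    layers.Conv1D(32, kernel_size=3, activation='relu', input_shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, activation='relu'),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1)            # regression target such as TBA
])
model.compile(optimizer='adam', loss='mse')
</code></pre>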
<p>The above suggestions are presented in the order I would suggest to try.</p>
<p>Good luck!</p> | 2018-04-09 12:37:22.563000+00:00 | 2018-04-09 12:37:22.563000+00:00 | null | null | 49,731,937 | <p>I've been working on this neural network with the intent to predict TBA (time based availability) of simulated windmill parks based on certain attributes. The neural network runs just fine, and gives me some predictions, however I'm not quite satisfied with the results. It fails to notice some very obvious correlations that I can clearly see by myself. Here is my current code: </p>
<pre><code># Import
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
maxi = 0.96
mini = 0.7
# Make data a np.array
data = pd.read_csv('datafile_ML_no_avg.csv')
data = data.values
# Shuffle the data
shuffle_indices = np.random.permutation(np.arange(len(data)))
data = data[shuffle_indices]
# Training and test data
data_train = data[0:int(len(data)*0.8),:]
data_test = data[int(len(data)*0.8):int(len(data)),:]
# Scale data
scaler = MinMaxScaler(feature_range=(mini, maxi))
scaler.fit(data_train)
data_train = scaler.transform(data_train)
data_test = scaler.transform(data_test)
# Build X and y
X_train = data_train[:, 0:5]
y_train = data_train[:, 6:7]
X_test = data_test[:, 0:5]
y_test = data_test[:, 6:7]
# Number of stocks in training data
n_args = X_train.shape[1]
multi = int(8)
# Neurons
n_neurons_1 = 8*multi
n_neurons_2 = 4*multi
n_neurons_3 = 2*multi
n_neurons_4 = 1*multi
# Session
net = tf.InteractiveSession()
# Placeholder
X = tf.placeholder(dtype=tf.float32, shape=[None, n_args])
Y = tf.placeholder(dtype=tf.float32, shape=[None,1])
# Initializers
sigma = 1
weight_initializer = tf.variance_scaling_initializer(mode="fan_avg",
distribution="uniform", scale=sigma)
bias_initializer = tf.zeros_initializer()
# Hidden weights
W_hidden_1 = tf.Variable(weight_initializer([n_args, n_neurons_1]))
bias_hidden_1 = tf.Variable(bias_initializer([n_neurons_1]))
W_hidden_2 = tf.Variable(weight_initializer([n_neurons_1, n_neurons_2]))
bias_hidden_2 = tf.Variable(bias_initializer([n_neurons_2]))
W_hidden_3 = tf.Variable(weight_initializer([n_neurons_2, n_neurons_3]))
bias_hidden_3 = tf.Variable(bias_initializer([n_neurons_3]))
W_hidden_4 = tf.Variable(weight_initializer([n_neurons_3, n_neurons_4]))
bias_hidden_4 = tf.Variable(bias_initializer([n_neurons_4]))
# Output weights
W_out = tf.Variable(weight_initializer([n_neurons_4, 1]))
bias_out = tf.Variable(bias_initializer([1]))
# Hidden layer
hidden_1 = tf.nn.relu(tf.add(tf.matmul(X, W_hidden_1), bias_hidden_1))
hidden_2 = tf.nn.relu(tf.add(tf.matmul(hidden_1, W_hidden_2),
bias_hidden_2))
hidden_3 = tf.nn.relu(tf.add(tf.matmul(hidden_2, W_hidden_3),
bias_hidden_3))
hidden_4 = tf.nn.relu(tf.add(tf.matmul(hidden_3, W_hidden_4),
bias_hidden_4))
# Output layer (transpose!)
out = tf.transpose(tf.add(tf.matmul(hidden_4, W_out), bias_out))
# Cost function
mse = tf.reduce_mean(tf.squared_difference(out, Y))
# Optimizer
opt = tf.train.AdamOptimizer().minimize(mse)
# Init
net.run(tf.global_variables_initializer())
# Fit neural net
batch_size = 10
mse_train = []
mse_test = []
# Run
epochs = 10
for e in range(epochs):
# Shuffle training data
shuffle_indices = np.random.permutation(np.arange(len(y_train)))
X_train = X_train[shuffle_indices]
y_train = y_train[shuffle_indices]
# Minibatch training
for i in range(0, len(y_train) // batch_size):
start = i * batch_size
batch_x = X_train[start:start + batch_size]
batch_y = y_train[start:start + batch_size]
# Run optimizer with batch
net.run(opt, feed_dict={X: batch_x, Y: batch_y})
# Show progress
if np.mod(i, 50) == 0:
mse_train.append(net.run(mse, feed_dict={X: X_train, Y: y_train}))
mse_test.append(net.run(mse, feed_dict={X: X_test, Y: y_test}))
pred = net.run(out, feed_dict={X: X_test})
print(pred)`
</code></pre>
<p>I have tried tweaking the number of hidden layers, the number of nodes per layer and the number of epochs, and I have tried different activation functions and optimizers. However, I am quite new to neural networks, so there might be something very obvious that I'm missing. </p>
<p>Thanks in advance to anyone who managed to read through all of that.</p> | 2018-04-09 11:32:10.473000+00:00 | 2018-04-09 12:37:22.563000+00:00 | null | python|tensorflow|machine-learning|neural-network|deep-learning | ['http://www.jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf?utm_content=buffer79b43&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer', 'https://arxiv.org/pdf/1708.02182.pdf', 'https://arxiv.org/pdf/1703.04691.pdf'] | 3 |
45,465,537 | <p>There is a paper evaluating these choices, which can be found here: <a href="https://arxiv.org/pdf/1606.02228.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1606.02228.pdf</a>. They do get better accuracy by using PReLU, but that is very minor. I am unsure if the improvement offsets the higher workload you have to do by using PReLU instead of ReLU. The question is are you already evaluating for that last percentage point in accuracy? If not do not bother yet with choices that only have minor impact on the performance of the model.</p> | 2017-08-02 15:52:04.673000+00:00 | 2017-08-02 15:52:04.673000+00:00 | null | null | 45,463,392 | <p>I have a network as follows</p>
<pre><code>BN-Scale-ReLU
</code></pre>
<p>I want to replace ReLU by PReLU. Then, it will be</p>
<pre><code>BN-Scale-PReLU
</code></pre>
<p>Could I obtain any gain with the second setting? Why? From what I have found, the second setting is not so popular. In some papers, BN-Scale-ReLU is simply replaced with PReLU. Is that right? </p> | 2017-08-02 14:16:04.347000+00:00 | 2017-08-02 15:52:04.673000+00:00 | 2017-08-02 14:29:34.580000+00:00 | machine-learning|tensorflow|deep-learning|caffe | ['https://arxiv.org/pdf/1606.02228.pdf'] | 1
59,039,646 | <p>As @sam-h mentions in his comment, this is an area of ongoing research. </p>
<p>There's no standard or automatic approach, so there's no one best practice to recommend – you'll likely have to sift through the various papers, in the list @sam-h provided and from elsewhere, for ideas. </p>
<p>In many cases, approaches don't use standard word2vec – adding extra steps before or during training – because standard word2vec is oblivious to the fact that a single word-token might have multiple contrasting senses. As a result, the standard word2vec vectors for words with many senses can wind up with a single vector that "mushes together" the many distinct senses.</p>
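<p>You can see this "mushing together" directly with a tiny, purely illustrative experiment (assuming gensim 4.x; the toy corpus and the ambiguous word "bank" are my own example, not taken from the papers mentioned here):</p>
<pre><code>from gensim.models import Word2Vec

sentences = [
    ["deposit", "money", "at", "the", "bank"],
    ["the", "bank", "raised", "interest", "rates"],
    ["we", "sat", "on", "the", "river", "bank"],
    ["the", "bank", "of", "the", "river", "flooded"],
]
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50)

# "bank" gets a single vector, so its nearest neighbours mix the finance
# sense and the river sense (on a real corpus the effect is much clearer).
print(model.wv.most_similar("bank", topn=5))
</code></pre>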
<p>One interesting write-up that does manage to bootstrap a model of multiple-senses from existing, word-sense-oblivous word-vectors is described in the paper "<a href="https://arxiv.org/abs/1601.03764" rel="nofollow noreferrer">Linear Algebraic Structure of Word Senses, with Applications to Polysemy</a>", which also has a less-formal <a href="http://www.offconvex.org/2016/07/10/embeddingspolysemy/" rel="nofollow noreferrer">blogpost write-up</a>.</p>
<p>Essentially, by assuming the rich space of all standard word-vectors actually draw from a smaller number of "discourses", and interpreting word-vectors as some combination of the alternate "atoms of discourse" (for their difference senses), they can tease-out the alternate senses of word-tokens that began with only a single vector. </p> | 2019-11-25 20:30:48.460000+00:00 | 2019-11-25 20:30:48.460000+00:00 | null | null | 59,007,436 | <p>I know how word2vec works, but I am having trouble with finding out how to implement word sense disambiguation using word2vec. Can you help with the process?</p> | 2019-11-23 12:07:33.473000+00:00 | 2019-11-25 20:30:48.460000+00:00 | 2019-11-23 12:11:44.150000+00:00 | python|nlp|word2vec|unsupervised-learning|word-sense-disambiguation | ['https://arxiv.org/abs/1601.03764', 'http://www.offconvex.org/2016/07/10/embeddingspolysemy/'] | 2 |
9,190,840 | <p>I don't have a complete answer, but these manipulations tend to 'just work'. A relevant paper might be <a href="http://arxiv.org/abs/math/0212377v1" rel="noreferrer">Objects of Categories as Complex Numbers by Fiore and Leinster</a> - I came across that one while reading <a href="http://blog.sigfpe.com/2007/09/arboreal-isomorphisms-from-nuclear.html" rel="noreferrer">sigfpe's blog on a related subject</a> ; the rest of that blog is a goldmine for similar ideas and is worth checking out!</p>
<p>You can also differentiate datatypes, by the way - that will get you the appropriate Zipper for the datatype!</p> | 2012-02-08 09:45:53.233000+00:00 | 2012-02-08 09:45:53.233000+00:00 | null | null | 9,190,352 | <p>The 'algebraic' expression for algebraic data types looks very suggestive to someone with a background in mathematics. Let me try to explain what I mean.</p>
<p>Having defined the basic types</p>
<ul>
<li>Product <code>•</code></li>
<li>Union <code>+</code></li>
<li>Singleton <code>X</code></li>
<li>Unit <code>1</code></li>
</ul>
<p>and using the shorthand <code>X²</code> for <code>X•X</code> and <code>2X</code> for <code>X+X</code> et cetera, we can then define algebraic expressions for e.g. linked lists</p>
<p><code>data List a = Nil | Cons a (List a)</code> ↔ <code>L = 1 + X • L</code></p>
<p>and binary trees:</p>
<p><code>data Tree a = Nil | Branch a (Tree a) (Tree a)</code> ↔ <code>T = 1 + X • T²</code></p>
<p>Now, my first instinct as a mathematician is to go nuts with these expressions, and try to solve for <code>L</code> and <code>T</code>. I could do this through repeated substitution, but it seems much easier to abuse the notation horrifically and pretend I can rearrange it at will. For example, for a linked list:</p>
<p><code>L = 1 + X • L</code></p>
<p><code>(1 - X) • L = 1</code></p>
<p><code>L = 1 / (1 - X) = 1 + X + X² + X³ + ...</code></p>
<p>where I've used the power series expansion of <code>1 / (1 - X)</code> in a totally unjustified way to derive an interesting result, namely that an <code>L</code> type is either <code>Nil</code>, or it contains 1 element, or it contains 2 elements, or 3, etc.</p>
<p>It gets more interesting if we do it for binary trees:</p>
<p><code>T = 1 + X • T²</code></p>
<p><code>X • T² - T + 1 = 0</code></p>
<p><code>T = (1 - √(1 - 4 • X)) / (2 • X)</code></p>
<p><code>T = 1 + X + 2 • X² + 5 • X³ + 14 • X⁴ + ...</code></p>
<p>again, using the power series expansion (done with <a href="http://www.wolframalpha.com/input/?i=%281+-+sqrt%281-4x%29%29+%2F+%282x%29" rel="noreferrer">Wolfram Alpha</a>). This expresses the non-obvious (to me) fact that there is only one binary tree with 1 element, 2 binary trees with two elements (the second element can be on the left or the right branch), 5 binary trees with three elements etc.</p>
<p>So my question is - what am I doing here? These operations seem unjustified (what exactly is the square root of an algebraic data type anyway?) but they lead to sensible results. does the quotient of two algebraic data types have any meaning in computer science, or is it just notational trickery?</p>
<p>And, perhaps more interestingly, is it possible to extend these ideas? Is there a theory of the algebra of types that allows, for example, arbitrary functions on types, or do types require a power series representation? If you can define a class of functions, then does composition of functions have any meaning?</p> | 2012-02-08 09:09:49.713000+00:00 | 2020-04-01 07:35:31.767000+00:00 | 2020-02-28 17:39:30.747000+00:00 | haskell|functional-programming|algebraic-data-types|miranda | ['http://arxiv.org/abs/math/0212377v1', 'http://blog.sigfpe.com/2007/09/arboreal-isomorphisms-from-nuclear.html'] | 2 |
9,199,166 | <p>Binary trees are defined by the equation <code>T=1+XT^2</code> in the semiring of types. By construction, <code>T=(1-sqrt(1-4X))/(2X)</code> is defined by the same equation in the semiring of complex numbers. So given that we're solving the same equation in the same class of algebraic structure it actually shouldn't be surprising that we see some similarities.</p>
<p>The catch is that when we reason about polynomials in the semiring of complex numbers we typically use the fact that the complex numbers form a ring or even a field so we find ourselves using operations such as subtraction that don't apply to semirings. But we can often eliminate subtractions from our arguments if we have a rule that allows us to cancel from both sides of an equation. This is the kind of thing proved by <a href="http://arxiv.org/abs/math/0212377v1">Fiore and Leinster</a> showing that many arguments about rings can be transferred to semirings.</p>
<p>This means that lots of your mathematical knowledge about rings can be reliably transferred to types. As a result, some arguments involving complex numbers or power series (in the ring of formal power series) can carry over to types in a completely rigorous way.</p>
<p>However there's more to the story than this. It's one thing to prove two types are equal (say) by showing two power series are equal. But you can also deduce information about types by inspecting the terms in the power series. I'm not sure of what the formal statement here should be. (I recommend Brent Yorgey's <a href="http://www.cis.upenn.edu/~byorgey/pub/species-pearl.pdf">paper</a> on <a href="http://en.wikipedia.org/wiki/Combinatorial_species">combinatorial species</a> for some work that's closely related but species are <em>not</em> the same as types.)</p>
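<p>As a quick numerical sanity check of that last point (my own sketch, not taken from any of the papers above), you can iterate the defining equation on truncated power series and read the tree counts straight off the coefficients:</p>
<pre><code># Iterate T <- 1 + X*T^2 on power series truncated at degree N.
N = 8

def mul(a, b):
    # product of two truncated power series given as coefficient lists
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

T = [0] * N
for _ in range(N):
    T2 = mul(T, T)
    T = [1] + T2[:N - 1]      # 1 + X*T^2

print(T)   # [1, 1, 2, 5, 14, 42, 132, 429] -- the Catalan numbers
</code></pre>
<p>Each iteration pins down at least one more coefficient, and the result matches the expansion quoted in the question.</p>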
<p>What I find utterly mind blowing is that what you've discovered can be extended to calculus. Theorems about calculus can be transferred over to the semiring of types. In fact, even arguments about finite differences can be transferred over and you find that classical theorems from numerical analysis have interpretations in type theory.</p>
<p>Have fun!</p> | 2012-02-08 18:21:36.297000+00:00 | 2012-02-08 18:21:36.297000+00:00 | null | null | 9,190,352 | <p>The 'algebraic' expression for algebraic data types looks very suggestive to someone with a background in mathematics. Let me try to explain what I mean.</p>
<p>Having defined the basic types</p>
<ul>
<li>Product <code>•</code></li>
<li>Union <code>+</code></li>
<li>Singleton <code>X</code></li>
<li>Unit <code>1</code></li>
</ul>
<p>and using the shorthand <code>X²</code> for <code>X•X</code> and <code>2X</code> for <code>X+X</code> et cetera, we can then define algebraic expressions for e.g. linked lists</p>
<p><code>data List a = Nil | Cons a (List a)</code> ↔ <code>L = 1 + X • L</code></p>
<p>and binary trees:</p>
<p><code>data Tree a = Nil | Branch a (Tree a) (Tree a)</code> ↔ <code>T = 1 + X • T²</code></p>
<p>Now, my first instinct as a mathematician is to go nuts with these expressions, and try to solve for <code>L</code> and <code>T</code>. I could do this through repeated substitution, but it seems much easier to abuse the notation horrifically and pretend I can rearrange it at will. For example, for a linked list:</p>
<p><code>L = 1 + X • L</code></p>
<p><code>(1 - X) • L = 1</code></p>
<p><code>L = 1 / (1 - X) = 1 + X + X² + X³ + ...</code></p>
<p>where I've used the power series expansion of <code>1 / (1 - X)</code> in a totally unjustified way to derive an interesting result, namely that an <code>L</code> type is either <code>Nil</code>, or it contains 1 element, or it contains 2 elements, or 3, etc.</p>
<p>It gets more interesting if we do it for binary trees:</p>
<p><code>T = 1 + X • T²</code></p>
<p><code>X • T² - T + 1 = 0</code></p>
<p><code>T = (1 - √(1 - 4 • X)) / (2 • X)</code></p>
<p><code>T = 1 + X + 2 • X² + 5 • X³ + 14 • X⁴ + ...</code></p>
<p>again, using the power series expansion (done with <a href="http://www.wolframalpha.com/input/?i=%281+-+sqrt%281-4x%29%29+%2F+%282x%29" rel="noreferrer">Wolfram Alpha</a>). This expresses the non-obvious (to me) fact that there is only one binary tree with 1 element, 2 binary trees with two elements (the second element can be on the left or the right branch), 5 binary trees with three elements etc.</p>
<p>So my question is - what am I doing here? These operations seem unjustified (what exactly is the square root of an algebraic data type anyway?) but they lead to sensible results. does the quotient of two algebraic data types have any meaning in computer science, or is it just notational trickery?</p>
<p>And, perhaps more interestingly, is it possible to extend these ideas? Is there a theory of the algebra of types that allows, for example, arbitrary functions on types, or do types require a power series representation? If you can define a class of functions, then does composition of functions have any meaning?</p> | 2012-02-08 09:09:49.713000+00:00 | 2020-04-01 07:35:31.767000+00:00 | 2020-02-28 17:39:30.747000+00:00 | haskell|functional-programming|algebraic-data-types|miranda | ['http://arxiv.org/abs/math/0212377v1', 'http://www.cis.upenn.edu/~byorgey/pub/species-pearl.pdf', 'http://en.wikipedia.org/wiki/Combinatorial_species'] | 3 |
56,788,417 | <hr>
<blockquote>
<p>Am I supposed to use a pre-trained model?</p>
</blockquote>
<p>Yes, you should, unless you are super confident that you can find a working model directly by yourself. </p>
<hr>
<blockquote>
<p>But there is no pre-trained model using lidar images</p>
</blockquote>
<p>First, I'm pretty sure there are LIDAR-based networks, e.g.: </p>
<blockquote>
<p>L Caltagirone , LIDAR-Camera Fusion for Road Detection Using Fully
Convolutional ... arxiv, 2018</p>
</blockquote>
<p>Second, even if there is no open-source implementation for a directly LIDAR-based model, you can always convert the LIDAR data to a depth image. For depth-image-based CNNs, there are hundreds of implementations for segmentation and detection. </p>
<hr>
<blockquote>
<p>How am I suppose to do it?</p>
</blockquote>
<p>First, you can run two parallel branches side by side, one for the RGB image and one for the depth/LIDAR 3D point cloud, and feed them separately.</p>
<p>Second, you can also combine them by merging the inputs into a single 4-channel tensor, transferring the initial weights into that single model, and finally performing transfer learning on your given dataset. (A rough sketch of both options follows below.)</p>
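<p>Here is a rough Keras sketch of those two options; every layer size, input resolution and class count below is a placeholder of my own choosing, not a recommendation from any particular paper:</p>
<pre><code>from tensorflow.keras import layers, models

# Option A: early fusion -- stack RGB (3 channels) and LIDAR depth (1 channel)
# into one 4-channel input tensor.
early = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 4)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])

# Option B: two parallel branches, merged before the classifier head.
rgb_in = layers.Input(shape=(224, 224, 3))
depth_in = layers.Input(shape=(224, 224, 1))
rgb = layers.GlobalAveragePooling2D()(layers.Conv2D(32, 3, activation="relu")(rgb_in))
depth = layers.GlobalAveragePooling2D()(layers.Conv2D(32, 3, activation="relu")(depth_in))
merged = layers.Concatenate()([rgb, depth])
out = layers.Dense(10, activation="softmax")(merged)
late = models.Model([rgb_in, depth_in], out)
</code></pre>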
<hr>
<blockquote>
<p>best CNN algorithm?</p>
</blockquote>
<p>It totally depends on your task and hardware. Do you need the best processing speed or the best accuracy? Please define your "best". </p>
<p>Also, are you using it for an autonomous car or for an in-house nursing-care system? Different CNN systems customize their weights for different purposes. </p>
<p>Generally, for real-time multiple-object detection on a cheap PC, e.g. a DJI Manifold, I would suggest Yolo-tiny.</p> | 2019-06-27 09:50:22.577000+00:00 | 2019-06-27 09:50:22.577000+00:00 | null | null | 56,629,509 | <p>I obtain depth & reflectance maps from Lidar (2D images) and I also have camera images (2D images). The images have the same size.</p>
<p>I want to use CNN to perform object detection using both images. It is a sort of "fusion CNN"</p>
<p>How am I supposed to do it? Am I supposed to use a pre-trained model? But there is no pre-trained model using lidar images..</p>
<p>Which is the best CNN algorithm to do it, i.e. for performing fusion of modalities for object detection?</p>
<p>Thank you in advance</p> | 2019-06-17 10:33:12.093000+00:00 | 2019-06-27 09:50:22.577000+00:00 | null | camera|object-detection|fusion|lidar | [] | 0
39,260,334 | <p>You do not provide deep nets with supervision on each layer; this would be too complex in terms of building the dataset. What you see on these slides is an interpretation of what is happening <strong>on its own</strong>, not what <strong>we enforce</strong>. There are both layer-by-layer techniques (less popular now) and everything-jointly techniques (popular now), but neither of them uses additional supervision; you do not tell the network to extract edges, it simply emerges from the optimization problem and the network structure in practice. </p>
<p>However, there are also deep architectures that do not have this property, like <a href="https://arxiv.org/abs/1603.09382" rel="nofollow">https://arxiv.org/abs/1603.09382</a> or, in general, recurrent nets (which are also "deep" in this sense). Thus, do not treat this as a property of deep learning; it is simply <strong>a common empirical observation when dealing with particular data</strong>, nothing less and nothing more.</p> | 2016-08-31 23:03:53.073000+00:00 | 2016-08-31 23:03:53.073000+00:00 | null | null | 39,258,184 | <p>I'm trying to grasp the concepts of deep neural networks. When they are explained, they basically say that each layer of the network represents one level of abstraction: for example, the first layer is about edges, the next layer is about shapes, like wheels, and the next layer is about what the wheels add up to, like a car.</p>
<p>This image pretty much represents the concept:
<a href="https://i.stack.imgur.com/795lQ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/795lQ.jpg" alt="enter image description here"></a></p>
<p>When figuring out the weights for each layer, is this done one layer at a time or all layers together? Do you first run the AI on a set of images labeled with different kinds of edges, then on a set of images labeled with things like wheels, and then on a set of images labeled with cars, or do you let the network figure that out for itself?</p> | 2016-08-31 20:08:23.153000+00:00 | 2016-08-31 23:03:53.073000+00:00 | null | artificial-intelligence | ['https://arxiv.org/abs/1603.09382'] | 1
42,831,615 | <p>Gensim is data source agnostic. For most of its functionality, it just requires a list of sentences as a document. Actually these documents can even consist of made-up words (i.e. for using word2vec <a href="https://arxiv.org/abs/1403.6652" rel="nofollow noreferrer">on graphs</a>).</p>
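<p>A minimal sketch of what "data source agnostic" means in practice (assuming gensim 4.x; the token lists are made up and could just as well be node IDs produced by random walks on a graph):</p>
<pre><code>from gensim.models import Word2Vec

# Any iterable of token lists works as a corpus -- gensim does not care
# where the tokens come from or whether they are real words.
corpus = [
    ["node_1", "node_7", "node_3"],
    ["node_2", "node_7", "node_9"],
    ["apple", "banana", "cherry"],
]
model = Word2Vec(corpus, vector_size=32, window=2, min_count=1)
print(model.wv["node_7"][:5])
</code></pre>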
<p>For parsing wikipedia dumps and other common corpus types, it provides <a href="https://github.com/RaRe-Technologies/gensim/tree/master/gensim/corpora" rel="nofollow noreferrer">some utility classes</a>. Check its <a href="http://radimrehurek.com/gensim/apiref.html" rel="nofollow noreferrer">API docs</a> of <code>corpora.*</code></p> | 2017-03-16 10:36:23.030000+00:00 | 2017-03-16 10:36:23.030000+00:00 | null | null | 42,389,748 | <p>Okay, this is a specific question about what data structure is required when providing training data to the Gensim python library. In particular, there must be an implicit understanding of what constitutes a document in any data that it is provided (otherwise it wouldn't, for instance, be able to find the tf-idf).</p>
<p>For a specific example, the wikipedia dump is used in the tutorials for the library for training purposes. The wikipedia dump is provided in XML. What gives gensim an understanding of separate documents? Is this understanding predicated on the nesing of xml elements? </p> | 2017-02-22 11:16:52+00:00 | 2017-03-16 10:36:23.030000+00:00 | null | python|gensim | ['https://arxiv.org/abs/1403.6652', 'https://github.com/RaRe-Technologies/gensim/tree/master/gensim/corpora', 'http://radimrehurek.com/gensim/apiref.html'] | 3 |
69,645,190 | <p>You cannot invert a generic nonlinear NN, however there is a NN architecture that will allow you to do this trivially (first proposed here, section 3.2 <a href="https://arxiv.org/pdf/1605.08803.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1605.08803.pdf</a>).</p>
<p>Basically, you split the input to 2 vectors, u1 and u2, and then transform it like this:</p>
<p>v1 = f1(u2) + g1(u2)u1</p>
<p>v2 = u2</p>
<p>Then, the inverse is:</p>
<p>u1 = (v1 - f1(v2)) / g1(v2)</p>
<p>u2 = v2</p>
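<p>A minimal NumPy sketch of such a coupling transform (f1 and g1 are arbitrary fixed functions here; in practice they would be small neural networks):</p>
<pre><code>import numpy as np

def f1(u2):
    return np.tanh(u2)

def g1(u2):
    return np.exp(u2)        # strictly positive, hence always invertible

def forward(u1, u2):
    return f1(u2) + g1(u2) * u1, u2

def inverse(v1, v2):
    return (v1 - f1(v2)) / g1(v2), v2

u1, u2 = np.random.randn(4), np.random.randn(4)
v1, v2 = forward(u1, u2)
r1, r2 = inverse(v1, v2)
print(np.allclose(r1, u1), np.allclose(r2, u2))   # True True
</code></pre>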
<p>Note that to calculate an inverse a division by g is used (multiplication by g inverse), so you have to make sure that the inverse of g exists.</p> | 2021-10-20 11:31:21.587000+00:00 | 2021-10-20 11:31:21.587000+00:00 | null | null | 65,938,921 | <p>I'm training a Neural Network that, given some inputs that here we'll call x and y, is able to predict the output, z. So, z=f(x,y)
Once the neural network is correctly trained, I'd like to be able to obtain a model that, given z and x, returns the other input:
What i mean is to obtain the model for which:
y=g(x,z)
Is it possible in Tensorflow?
Thak you in advance!</p> | 2021-01-28 14:17:11.310000+00:00 | 2021-10-20 11:31:21.587000+00:00 | null | tensorflow|keras|neural-network|regression|inverse | ['https://arxiv.org/pdf/1605.08803.pdf'] | 1 |
29,725,922 | <p>Most of the time tanh converges more quickly than the sigmoid/logistic function and achieves better accuracy <a href="http://www.cscjournals.org/manuscript/Journals/IJAE/volume1/Issue4/IJAE-26.pdf">[1]</a>. However, the rectified linear unit (ReLU), proposed more recently by Hinton's group <a href="http://machinelearning.wustl.edu/mlpapers/paper_files/icml2010_NairH10.pdf">[2]</a>, was shown to train six times faster than tanh <a href="http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf">[3]</a> to reach the same training error. You can refer to <a href="http://en.wikipedia.org/wiki/Rectifier_%28neural_networks%29">[4]</a> to see what benefits ReLU provides.</p>
<hr>
<p>Based on about two years of machine-learning experience, I want to share some strategies that most papers use, together with my own experience in computer vision.</p>
<h2>Normalizing input is very important</h2>
<p>Normalizing well yields better performance and faster convergence. Most of the time we subtract the mean so that the input has zero mean, which prevents the weights from all changing in the same direction and therefore converging slowly <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf">[5]</a>. Recently Google also pointed out this phenomenon, called internal covariate shift, when training deep networks, and proposed batch normalization <a href="http://arxiv.org/pdf/1502.03167v3.pdf">[6]</a> to normalize each vector to zero mean and unit variance.</p>
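<p>A simple sketch of that zero-mean / unit-variance normalization (the data here is a random placeholder; the key point is that the statistics come from the training set only and are reused for the test data):</p>
<pre><code>import numpy as np

X_train = np.random.rand(1000, 32).astype("float32")   # placeholder training data
X_test = np.random.rand(200, 32).astype("float32")     # placeholder test data

mean = X_train.mean(axis=0)
std = X_train.std(axis=0) + 1e-8    # small epsilon avoids division by zero

X_train_norm = (X_train - mean) / std
X_test_norm = (X_test - mean) / std
</code></pre>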
<h2>More data more accuracy</h2>
<p>More training data generalizes the feature space better and prevents overfitting. In computer vision, if the training data is not enough, the most commonly used techniques for enlarging the training dataset are data augmentation and synthesizing training data.</p>
<h2>Choosing a good activation function allows training better and efficiently.</h2>
<p>The ReLU nonlinearity works better and has produced state-of-the-art results in deep learning and MLPs. Moreover, it has some benefits, e.g. simple implementation and cheaper computation in back-propagation, so deeper neural nets can be trained efficiently. However, a ReLU gets zero gradient and does not train when the unit is not active (its output is zero). Hence some modified ReLUs have been proposed, e.g. Leaky ReLU and Noisy ReLU; the most popular is PReLU <a href="http://arxiv.org/pdf/1502.01852v1.pdf">[7]</a>, proposed by Microsoft, which generalizes the traditional rectified unit.</p>
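<p>For reference, here is what those activations look like in plain NumPy (a sketch only; in PReLU the slope <code>a</code> is a learned parameter, typically one per channel):</p>
<pre><code>import numpy as np

def relu(x):
    return np.maximum(0.0, x)           # zero gradient for x < 0

def leaky_relu(x, a=0.01):
    return np.where(x > 0, x, a * x)    # fixed small slope for x < 0

def prelu(x, a):
    return np.where(x > 0, x, a * x)    # same form, but a is learned

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), leaky_relu(x), prelu(x, a=0.25))
</code></pre>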
<h2>Others</h2>
<ul>
<li>choose a large initial learning rate (as long as training does not oscillate or diverge) so as to find a better global minimum.</li>
<li>shuffling data</li>
</ul> | 2015-04-19 04:49:34.787000+00:00 | 2015-04-19 04:49:34.787000+00:00 | null | null | 24,282,121 | <p>Im personally studying theories of neural network and got some questions. </p>
<p>In many books and references, for activation function of hidden layer, hyper-tangent functions were used. </p>
<p>Books came up with really simple reason that linear combinations of tanh functions can describe nearly all shape of functions with given error. </p>
<p>But, there came a question. </p>
<ol>
<li>Is this a real reason why tanh function is used?</li>
<li>If then, is it the only reason why tanh function is used?</li>
<li>if then, is tanh function the only function that can do that?</li>
<li>if not, what is the real reason?..</li>
</ol>
<p>I stock here keep thinking... please help me out of this mental(?...) trap!</p> | 2014-06-18 09:38:06.290000+00:00 | 2020-03-05 20:27:08.177000+00:00 | null | machine-learning|neural-network|hyperbolic-function | ['http://www.cscjournals.org/manuscript/Journals/IJAE/volume1/Issue4/IJAE-26.pdf', 'http://machinelearning.wustl.edu/mlpapers/paper_files/icml2010_NairH10.pdf', 'http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf', 'http://en.wikipedia.org/wiki/Rectifier_%28neural_networks%29', 'http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf', 'http://arxiv.org/pdf/1502.03167v3.pdf', 'http://arxiv.org/pdf/1502.01852v1.pdf'] | 7 |
14,115,627 | <p>The following is a demo implementation in C# (and Java, see end of answer) based on depth first search.</p>
<p>An outer loop scans all nodes of the graph and starts a search from every node. Node neighbours (according to the list of edges) are added to the cycle path. Recursion ends if no more non-visited neighbours can be added. A new cycle is found if the path is longer than two nodes and the next neighbour is the start of the path. To avoid duplicate cycles, the cycles are normalized by rotating the smallest node to the start. Cycles in inverted ordering are also taken into account.</p>
<p>This is just a naive implementation.
The classical paper is: Donald B. Johnson. Finding all the elementary circuits of a directed graph. SIAM J. Comput., 4(1):77–84, 1975.</p>
<p>A recent survey of modern algorithms can be found <a href="http://arxiv.org/pdf/1205.2766.pdf" rel="noreferrer">here</a></p>
<pre><code>using System;
using System.Collections.Generic;
namespace akCyclesInUndirectedGraphs
{
class Program
{
// Graph modelled as list of edges
static int[,] graph =
{
{1, 2}, {1, 3}, {1, 4}, {2, 3},
{3, 4}, {2, 6}, {4, 6}, {7, 8},
{8, 9}, {9, 7}
};
static List<int[]> cycles = new List<int[]>();
static void Main(string[] args)
{
for (int i = 0; i < graph.GetLength(0); i++)
for (int j = 0; j < graph.GetLength(1); j++)
{
findNewCycles(new int[] {graph[i, j]});
}
foreach (int[] cy in cycles)
{
string s = "" + cy[0];
for (int i = 1; i < cy.Length; i++)
s += "," + cy[i];
Console.WriteLine(s);
}
}
static void findNewCycles(int[] path)
{
int n = path[0];
int x;
int[] sub = new int[path.Length + 1];
for (int i = 0; i < graph.GetLength(0); i++)
for (int y = 0; y <= 1; y++)
if (graph[i, y] == n)
// edge referes to our current node
{
x = graph[i, (y + 1) % 2];
if (!visited(x, path))
// neighbor node not on path yet
{
sub[0] = x;
Array.Copy(path, 0, sub, 1, path.Length);
// explore extended path
findNewCycles(sub);
}
else if ((path.Length > 2) && (x == path[path.Length - 1]))
// cycle found
{
int[] p = normalize(path);
int[] inv = invert(p);
if (isNew(p) && isNew(inv))
cycles.Add(p);
}
}
}
static bool equals(int[] a, int[] b)
{
bool ret = (a[0] == b[0]) && (a.Length == b.Length);
for (int i = 1; ret && (i < a.Length); i++)
if (a[i] != b[i])
{
ret = false;
}
return ret;
}
static int[] invert(int[] path)
{
int[] p = new int[path.Length];
for (int i = 0; i < path.Length; i++)
p[i] = path[path.Length - 1 - i];
return normalize(p);
}
// rotate cycle path such that it begins with the smallest node
static int[] normalize(int[] path)
{
int[] p = new int[path.Length];
int x = smallest(path);
int n;
Array.Copy(path, 0, p, 0, path.Length);
while (p[0] != x)
{
n = p[0];
Array.Copy(p, 1, p, 0, p.Length - 1);
p[p.Length - 1] = n;
}
return p;
}
static bool isNew(int[] path)
{
bool ret = true;
foreach(int[] p in cycles)
if (equals(p, path))
{
ret = false;
break;
}
return ret;
}
static int smallest(int[] path)
{
int min = path[0];
foreach (int p in path)
if (p < min)
min = p;
return min;
}
static bool visited(int n, int[] path)
{
bool ret = false;
foreach (int p in path)
if (p == n)
{
ret = true;
break;
}
return ret;
}
}
}
</code></pre>
<p>The cycles for the demo graph:</p>
<pre><code>1,3,2
1,4,3,2
1,4,6,2
1,3,4,6,2
1,4,6,2,3
1,4,3
2,6,4,3
7,9,8
</code></pre>
<p>The algorithm coded in Java:</p>
<pre><code>import java.util.ArrayList;
import java.util.List;
public class GraphCycleFinder {
// Graph modeled as list of edges
static int[][] graph =
{
{1, 2}, {1, 3}, {1, 4}, {2, 3},
{3, 4}, {2, 6}, {4, 6}, {7, 8},
{8, 9}, {9, 7}
};
static List<int[]> cycles = new ArrayList<int[]>();
/**
* @param args
*/
public static void main(String[] args) {
for (int i = 0; i < graph.length; i++)
for (int j = 0; j < graph[i].length; j++)
{
findNewCycles(new int[] {graph[i][j]});
}
for (int[] cy : cycles)
{
String s = "" + cy[0];
for (int i = 1; i < cy.length; i++)
{
s += "," + cy[i];
}
o(s);
}
}
static void findNewCycles(int[] path)
{
int n = path[0];
int x;
int[] sub = new int[path.length + 1];
for (int i = 0; i < graph.length; i++)
for (int y = 0; y <= 1; y++)
if (graph[i][y] == n)
// edge refers to our current node
{
x = graph[i][(y + 1) % 2];
if (!visited(x, path))
// neighbor node not on path yet
{
sub[0] = x;
System.arraycopy(path, 0, sub, 1, path.length);
// explore extended path
findNewCycles(sub);
}
else if ((path.length > 2) && (x == path[path.length - 1]))
// cycle found
{
int[] p = normalize(path);
int[] inv = invert(p);
if (isNew(p) && isNew(inv))
{
cycles.add(p);
}
}
}
}
// check of both arrays have same lengths and contents
static Boolean equals(int[] a, int[] b)
{
Boolean ret = (a[0] == b[0]) && (a.length == b.length);
for (int i = 1; ret && (i < a.length); i++)
{
if (a[i] != b[i])
{
ret = false;
}
}
return ret;
}
// create a path array with reversed order
static int[] invert(int[] path)
{
int[] p = new int[path.length];
for (int i = 0; i < path.length; i++)
{
p[i] = path[path.length - 1 - i];
}
return normalize(p);
}
// rotate cycle path such that it begins with the smallest node
static int[] normalize(int[] path)
{
int[] p = new int[path.length];
int x = smallest(path);
int n;
System.arraycopy(path, 0, p, 0, path.length);
while (p[0] != x)
{
n = p[0];
System.arraycopy(p, 1, p, 0, p.length - 1);
p[p.length - 1] = n;
}
return p;
}
// compare path against known cycles
// return true, iff path is not a known cycle
static Boolean isNew(int[] path)
{
Boolean ret = true;
for(int[] p : cycles)
{
if (equals(p, path))
{
ret = false;
break;
}
}
return ret;
}
static void o(String s)
{
System.out.println(s);
}
// return the int of the array which is the smallest
static int smallest(int[] path)
{
int min = path[0];
for (int p : path)
{
if (p < min)
{
min = p;
}
}
return min;
}
// check if vertex n is contained in path
static Boolean visited(int n, int[] path)
{
Boolean ret = false;
for (int p : path)
{
if (p == n)
{
ret = true;
break;
}
}
return ret;
}
}
</code></pre> | 2013-01-02 00:32:39.630000+00:00 | 2013-06-23 11:16:37.737000+00:00 | 2013-06-23 11:16:37.737000+00:00 | null | 12,367,801 | <p>I need a working algorithm for finding all simple cycles in an undirected graph. I know the cost can be exponential and the problem is NP-complete, but I am going to use it in a small graph (up to 20-30 vertices) and the cycles are small in number.</p>
<p>After a long research (mainly here) I still don't have a working approach. Here is a summary of my search:</p>
<p><a href="https://stackoverflow.com/questions/5068086/finding-all-cycles-in-an-undirected-graph">Finding all cycles in an undirected graph</a></p>
<p><a href="https://stackoverflow.com/questions/526331/cycles-in-an-undirected-graph">Cycles in an Undirected Graph</a> -> detects only whether there is a cycle or not</p>
<p><a href="https://stackoverflow.com/questions/9804127/finding-polygons-within-an-undirected-graph">Finding polygons within an undirected Graph </a> -> very nice description, but no solution</p>
<p><a href="https://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402">Finding all cycles in a directed graph</a> -> finds cycles only in directed graphs</p>
<p><a href="https://stackoverflow.com/questions/9626249/detect-cycles-in-undirected-graph-using-boost-graph-library">Detect cycles in undirected graph using boost graph library</a></p>
<p>The only answer I found, which approaches my problem, is this one:</p>
<p><a href="https://stackoverflow.com/questions/2839908/find-all-cycles-in-graph-redux">Find all cycles in graph, redux</a></p>
<p>It seems that finding a basic set of cycles and XOR-ing them could do the trick. Finding a basic set of cycles is easy, but I don't understand how to combine them in order to obtain all cycles in the graph...</p> | 2012-09-11 10:34:38.887000+00:00 | 2020-07-12 07:56:14.703000+00:00 | 2017-05-23 11:47:05.623000+00:00 | graph|cycle | ['http://arxiv.org/pdf/1205.2766.pdf'] | 1 |
44,081,315 | <p>Although in the general case you cannot do better than O(log N), you can at least optimize that, thus significantly reducing the constant of proportionality in front of O(log N).</p>
<p>If you have to perform multiple searches on the same array, these can be vectorized using SIMD extensions, thus further cutting down on computation cost.</p>
<p>In particular, if you are dealing with arrays of floating point numbers which satisfy certain properties, then there are ways to construct a special index which then allows searching the array in O(1).</p>
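<p>To give a feel for the indexing idea (this is only a toy sketch of the general principle, not the exact scheme from the paper cited below): precompute a small direct-lookup table over the sorted array so that each query touches only one short bucket.</p>
<pre><code>import bisect

def bucket(x, lo, scale, n_buckets):
    return max(0, min(int((x - lo) * scale), n_buckets - 1))

def build_index(a, n_buckets=64):
    # assumes a is sorted and a[0] < a[-1]
    lo, scale = a[0], n_buckets / (a[-1] - a[0])
    starts = [len(a)] * (n_buckets + 1)
    for i in range(len(a) - 1, -1, -1):            # walk backwards so the first index wins
        starts[bucket(a[i], lo, scale, n_buckets)] = i
    for b in range(n_buckets - 1, -1, -1):         # fill empty buckets
        starts[b] = min(starts[b], starts[b + 1])
    return lo, scale, starts

def lookup(a, idx, x):
    lo, scale, starts = idx
    b = bucket(x, lo, scale, len(starts) - 1)
    # binary search restricted to one bucket instead of the whole array
    return bisect.bisect_right(a, x, starts[b], starts[b + 1]) - 1

a = [0.5, 1.7, 2.2, 3.9, 4.1, 7.3, 8.8, 9.9]
idx = build_index(a)
print(lookup(a, idx, 4.0))   # 3, because a[3] <= 4.0 < a[4]
</code></pre>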
<p>All of the above aspects are discussed with test results in:
<a href="https://arxiv.org/abs/1506.08620" rel="nofollow noreferrer" title="Cannizzo, 2015, Fast and Vectorizable Alternative to Binary Search in O(1) Applicable to a Wide Domain of Sorted Arrays of Floating Point Numbers">Cannizzo, 2015, Fast and Vectorizable Alternative to Binary Search in O(1) Applicable to a Wide Domain of Sorted Arrays of Floating Point Numbers</a>
The paper comes with source code on <a href="https://github.com/fabiocannizzo/FastBinarySearch" rel="nofollow noreferrer">github</a>.</p> | 2017-05-20 02:24:38.817000+00:00 | 2017-05-23 23:34:15.303000+00:00 | 2017-05-23 23:34:15.303000+00:00 | null | 4,057,258 | <p>is there an algorithm that is faster than binary search, for searching in sorted values of array?</p>
<p>In my case, I have sorted values (they could be values of any type) in an array <code>A</code>, and I need to return <code>n</code> if the value I am looking for is in the range of <code>A[n] and A[n+1]</code></p> | 2010-10-30 04:33:53.090000+00:00 | 2022-08-23 09:27:25.267000+00:00 | 2018-02-13 11:52:03.553000+00:00 | c++|arrays|algorithm|search|binary-search | ['https://arxiv.org/abs/1506.08620', 'https://github.com/fabiocannizzo/FastBinarySearch'] | 2
70,522,252 | <p>According to <a href="https://arxiv.org/pdf/2109.08668.pdf" rel="nofollow noreferrer">this</a> paper, the dff is the <strong>feed forward upwards projection size</strong>.</p> | 2021-12-29 16:31:59.980000+00:00 | 2021-12-29 16:31:59.980000+00:00 | null | null | 69,110,073 | <p>The paper describes the base model's network configuration as below:</p>
<p><a href="https://i.stack.imgur.com/0CfRQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0CfRQ.png" alt="enter image description here" /></a></p>
<pre><code>d_model: embedding size
h: attention head count
d_k: key matrix dimension
d_v: value matrix dimension
dff: 2048?
</code></pre>
<p>What's the dff?</p> | 2021-09-08 21:55:29.823000+00:00 | 2021-12-29 16:31:59.980000+00:00 | null | transformer-model | ['https://arxiv.org/pdf/2109.08668.pdf'] | 1 |
54,248,257 | <p>After a quick search I found this paper: <em>High-dimensional GARCH process segmentation
with an application to Value-at-Risk</em> by Haeran Cho and Karolos K. Korkas. Link to their paper <a href="https://arxiv.org/pdf/1706.01155.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>In their abstract they reference their package, <code>segMGarch</code>, which is available on CRAN (Documentation can be found <a href="https://cran.r-project.org/web/packages/segMGarch/segMGarch.pdf" rel="nofollow noreferrer">here</a>).</p>
<p>I found this with very little effort. Consider looking into CRAN packages, other questions on StackOverflow (I found <a href="https://stackoverflow.com/questions/9969962/simulation-of-garch-in-r?rq=1">this one</a> in the related posts), or simply Googling it.</p> | 2019-01-18 05:38:53.427000+00:00 | 2019-01-18 05:38:53.427000+00:00 | null | null | 54,247,894 | <p>I'm looking for an R package and coding to simulate a multivariate Garch process with jump diffusion. Is there a package or R code I can replicate and modify?</p> | 2019-01-18 04:46:21.743000+00:00 | 2019-01-18 05:38:53.427000+00:00 | null | r | ['https://arxiv.org/pdf/1706.01155.pdf', 'https://cran.r-project.org/web/packages/segMGarch/segMGarch.pdf', 'https://stackoverflow.com/questions/9969962/simulation-of-garch-in-r?rq=1'] | 3 |
52,267,303 | <p>The main reason, in my opinion, is two-fold:</p>
<ol>
<li><p>Algorithm uses priority replay. This algorithm gives replay memories with higher temporal difference errors a higher probability of being selected, because it means the RL was not able to predict the correct Q-values given those states, so by picking these states more often, your model will train to do better on these states. But the problem is that these states are only a subset of your whole state space, and so your model will be biased towards this subset, and perform poorly for the remainder of your state space. This is especially a problem as you train your model longer, because only a small set of your states will have very large errors. To avoid this, you can anneal out the priority replay. Please see the original paper here: <a href="https://arxiv.org/abs/1511.05952" rel="noreferrer">https://arxiv.org/abs/1511.05952</a></p></li>
<li><p>You may also want to anneal out your learning rate, or increase the batch size as training goes on. These two are apparently equivalent according to a new paper published earlier this year at Google. <a href="https://openreview.net/forum?id=B1Yy1BxCZ" rel="noreferrer">https://openreview.net/forum?id=B1Yy1BxCZ</a>
This will allow your model to very slowly have a learning rate of 0 as training goes on and on, essentially stopping training after a while. Because if you never lower learning rate, an unlucky batch of bad data can potentially ruin the weights of your neural network.</p></li>
</ol> | 2018-09-11 00:55:24.417000+00:00 | 2018-09-11 00:55:24.417000+00:00 | null | null | 51,960,225 | <p>I am trying to implement a DQN algorithm that trains the agent to play Breakout from the Open AI Gym Atari Environment by giving the RAM state of the game at each time step as input. I used the code from the AI-Blog repository by jaara <a href="https://github.com/jaara/AI-blog/blob/master/Seaquest-DDQN-PER.py#L102" rel="nofollow noreferrer">https://github.com/jaara/AI-blog/blob/master/Seaquest-DDQN-PER.py#L102</a> and made some change to it. Here is the code:</p>
<pre><code>import random, numpy, math, gym
from SumTree import SumTree
import tensorflow as tf
import numpy as np
from tensorflow.keras import backend as K
import scipy.misc
# -----------------HYPER PARAMETERS--------------
# IMAGE_WIDTH = 84
# IMAGE_HEIGHT = 84
RAM_SIZE = 128
IMAGE_STACK = 2
HUBER_LOSS_DELTA = 2.0
LEARNING_RATE = 0.00025
MEMORY_CAPACITY = 200000
BATCH_SIZE = 32
GAMMA = 0.99
MAX_EPSILON = 1
MIN_EPSILON = 0.1
EXPLORATION_STOP = 500000 # at this step epsilon will be 0.01
LAMBDA = - math.log(0.01) / EXPLORATION_STOP # speed of decay
UPDATE_TARGET_FREQUENCY = 10000
#-------------------- UTILITIES -----------------------
def huber_loss(y_true, y_pred):
err = y_true - y_pred
cond = K.abs(err) < HUBER_LOSS_DELTA
L2 = 0.5 * K.square(err)
L1 = HUBER_LOSS_DELTA * (K.abs(err) - 0.5 * HUBER_LOSS_DELTA)
loss = tf.where(cond, L2, L1) # Keras does not cover where function in tensorflow :-(
return K.mean(loss)
# def processImage( ram ):
# rgb = scipy.misc.imresize(ram, (IMAGE_WIDTH, IMAGE_HEIGHT), interp='bilinear')
#
# r, g, b = rgb[:,:,0], rgb[:,:,1], rgb[:,:,2]
# gray = 0.2989 * r + 0.5870 * g + 0.1140 * b # extract luminance
#
# o = gray.astype('float32') / 128 - 1 # normalize
# return o
def save_model(agent, problem, algorithm_name=None):
file_name = ("saved_models\\"
+ problem +
"-" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M"))
if algorithm_name:
file_name += "-" + algorithm_name + ".h5"
else:
file_name += ".h5"
agent.brain.model.save(file_name)
#-------------------- BRAIN ---------------------------
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
class Brain:
def __init__(self, stateCnt, actionCnt, load_file=None):
self.stateCnt = stateCnt
self.actionCnt = actionCnt
self.history = None
self.model = self._createModel()
self.model_ = self._createModel() # target network
if load_file:
self.model.load_weights(load_file)
self.model.load_weights(load_file)
def _createModel(self):
model = Sequential()
model.add(Dense(units=128, activation="relu", input_dim=self.stateCnt))
model.add(Dense(units=self.actionCnt, activation='linear', input_dim=128))
opt = RMSprop(lr=LEARNING_RATE)
model.compile(loss=huber_loss, optimizer=opt)
return model
def train(self, x, y, epochs=1, verbose=0):
self.history = self.model.fit(x, y, batch_size=32, epochs=epochs, verbose=verbose)
# print(history.history["val_loss"])
def predict(self, s, target=False):
if target:
return self.model_.predict(s)
else:
return self.model.predict(s)
def predictOne(self, s, target=False):
return self.predict(s.reshape(1, IMAGE_STACK*RAM_SIZE), target).flatten()
def updateTargetModel(self):
self.model_.set_weights(self.model.get_weights())
#-------------------- MEMORY --------------------------
class Memory: # stored as ( s, a, r, s_ ) in SumTree
e = 0.01 # epsilon, prevent error from falling below 0
a = 0.6 # alpha, the degree of bias, with 0 meaning no bias at all
def __init__(self, capacity):
self.tree = SumTree(capacity)
def _getPriority(self, error):
return (error + self.e) ** self.a
def add(self, error, sample):
p = self._getPriority(error)
self.tree.add(p, sample)
def sample(self, n):
batch = []
segment = self.tree.total() / n
for i in range(n):
a = segment * i
b = segment * (i + 1)
s = random.uniform(a, b)
(idx, p, data) = self.tree.get(s)
batch.append((idx, data))
return batch
def update(self, idx, error):
"""
Update the priority value of given entry
:param idx: The index of the given entry
:param error: The error value to be updated.
:return: None
"""
p = self._getPriority(error)
self.tree.update(idx, p)
#-------------------- AGENT ---------------------------
class Agent:
steps = 0
epsilon = MAX_EPSILON
def __init__(self, stateCnt, actionCnt, file=None):
"""
Initialize an agent, specifying the shape of the states and number of actions
:param (int, int) stateCnt: (x, y) tuple specifying the shape of the state
x: the number of arguments in a state e.g. size of the ram
y: number of frames seen by the agent
:param actionCnt: The number of actions this agent can do
:param file: The model (e.g: .h5) file that's being loaded into the agents' brain.
"""
self.stateCnt = stateCnt
self.actionCnt = actionCnt
self.brain = Brain(stateCnt, actionCnt, file)
self.memory = Memory(MEMORY_CAPACITY)
def act(self, s):
"""
Do an action according to the current state
:param numpyArray s: the current state.
:return: int: the action that's being done
"""
if random.random() < self.epsilon:
return random.randint(0, self.actionCnt-1)
else:
return numpy.argmax(self.brain.predictOne(s))
def observe(self, sample): # in (s, a, r, s_) format
"""
Add a sample to its memory
:param tuple sample: the (s, a, r, s_) sample to be added. s, s_ are array of size STACK_SIZE
:return: None
"""
x, y, errors = self._getTargets([(0, sample)])
self.memory.add(errors, sample)
if self.steps % UPDATE_TARGET_FREQUENCY == 0:
self.brain.updateTargetModel()
# slowly decrease Epsilon based on our eperience
self.steps += 1
self.epsilon = MIN_EPSILON + (MAX_EPSILON - MIN_EPSILON) * math.exp(-LAMBDA * self.steps)
def _getTargets(self, batch):
"""
Get the list of estimated and target Q values for a given batch )
:param list batch: The given [(error, (s, a, s', r))] samples
:return: tuple (list[float], list[float], list[float]): Return three values: x, y, error
x: list of estimated Q(s, a) value
y: list of estimated target Q(s, a) value, which is r + gamma*maxQ_(s, a)
error: list of MSE between x and y.
"""
no_state = numpy.zeros(self.stateCnt)
states = numpy.array([ sample[1][0] for sample in batch ])
states_ = numpy.array([ (no_state if sample[1][3] is None else sample[1][3]) for sample in batch ])
p = agent.brain.predict(states) # estimated Q values for each sample in the batch
p_ = agent.brain.predict(states_, target=False)
pTarget_ = agent.brain.predict(states_, target=True)
x = numpy.zeros((len(batch), IMAGE_STACK*RAM_SIZE))
y = numpy.zeros((len(batch), self.actionCnt))
errors = numpy.zeros(len(batch))
for i in range(len(batch)):
sample = batch[i][1] # the i is the index, 1 is the actual sample
s = sample[0]; a = sample[1]; r = sample[2]; s_ = sample[3]
target = p[i] # target Q value for the i-th state
oldVal = target[a]
if s_ is None:
target[a] = r
else:
target[a] = r + GAMMA * pTarget_[i][ numpy.argmax(p_[i]) ] # double DQN
x[i] = s
y[i] = target
errors[i] = abs(oldVal - target[a])
return (x, y, errors)
def replay(self):
"""
Take a batch from the agent's memory, get the x and y data and train it in the brain.
Also update the error values (priorities) of the entries in the batch.
:return: None
"""
batch = self.memory.sample(BATCH_SIZE)
x, y, errors = self._getTargets(batch)
# update errors
for i in range(len(batch)):
idx = batch[i][0]
self.memory.update(idx, errors[i])
self.brain.train(x, y)
class RandomAgent:
memory = Memory(MEMORY_CAPACITY)
exp = 0
def __init__(self, actionCnt):
self.actionCnt = actionCnt
def act(self, s):
return random.randint(0, self.actionCnt-1)
def observe(self, sample):
"""
Add a sample to its memory
:param 4-tuple sample: the (s, a, r, s_) sample to be added
:return: None
"""
# in (s, a, r, s_) format
error = abs(sample[2]) # reward
self.memory.add(error, sample)
self.exp += 1
def replay(self):
pass
#-------------------- ENVIRONMENT ---------------------
class Environment:
def __init__(self, problem):
self.problem = problem
self.env = gym.make(problem)
self.frames = 0
self.episodes = 0
self.R_40epi = 0
def run(self, agent):
ram = self.env.reset()
# w = processImage(ram)
s = numpy.concatenate((ram, numpy.zeros(128*(IMAGE_STACK-1))))
R = 0
last_action = 0
while True:
self.env.render()
self.frames += 1
# Frame skipping
# if self.frames % IMAGE_STACK == 0:
a = agent.act(s)
# last_action = a
# else:
# a = last_action
r = 0
ram, r, done, info = self.env.step(a)
s_ = numpy.concatenate((s[128:128*IMAGE_STACK], ram)) # last two screens
r = np.clip(r, -1, 1) # clip reward to [-1, 1]
if done: # terminal state
s_ = None
agent.observe( (s, a, r, s_) )
agent.replay()
s = s_
R += r
if done:
self.R_40epi += R
break
info = ("Total reward: " + str(R) + " " +
"Episode:" + str(self.episodes) + " " +
"Frames:" + str(self.frames) + " " +
datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S"))
if not type(agent) is RandomAgent and agent.brain.history is not None:
info = (info + " loss: " + str(agent.brain.history.history["loss"]))
print(info)
if self.episodes % 40 == 0:
print("average in last 40 episodes:", self.R_40epi/40)
self.R_40epi = 0
self.episodes += 1
# save every 30 min
if datetime.datetime.now().strftime("%M") == "00" and type(agent) is not RandomAgent:
save_model(agent, self.problem, "ddqn-ram")
#-------------------- MAIN ----------------------------
import datetime
import sys
PROBLEM = 'Breakout-ram-v0'
env = Environment(PROBLEM)
# file = "saved_models\Breakout-ram-v0-2018-08-17-16-46-ddqn-ram.h5"
stateCnt = IMAGE_STACK*RAM_SIZE
actionCnt = env.env.action_space.n
agent = Agent(stateCnt, actionCnt)
randomAgent = RandomAgent(actionCnt)
try:
print("Initialization with random agent...")
while randomAgent.exp < MEMORY_CAPACITY:
env.run(randomAgent)
print(randomAgent.exp, "/", MEMORY_CAPACITY)
agent.memory = randomAgent.memory
randomAgent = None
print("Starting learning")
env.frames = 0
env.episodes = 0
# S = env.env.step(env.env.action_space.sample)[0]
while True:
env.run(agent)
finally:
save_model(agent, PROBLEM, "ddqn-ram-single128")
</code></pre>
<p>The problem I encountered is that when I tried to train the agent with this code, the average reward that the agent receives per episode was increasing at first, but once the reward reaches around 3 to 4 (which happens at around 1 million timesteps), it begins to decrease and stabilizes at 1, never increasing again no matter how much longer I train it (most algorithms reach a reward of 60 to 100). The difference between the original code and my modified version is that I use the RAM state of the game as states instead of pictures, I use only a single 128-node dense hidden layer, and I am playing Breakout instead of Sea Quest, which is what the original code is playing. The code also has double DQN, reward clipping and prioritized experience replay implemented. What could possibly be the cause of the problem? Does reading RAM instead of game frames cause the problem?</p>
<p>For reference, this is the implementation of the "Sum Tree" data structure I used:</p>
<pre><code>import numpy

class SumTree:
def __init__(self, capacity):
"""
Initialize a sum tree structure
:param capacity: the number of values the tree can store
"""
self.capacity = capacity
self.tree = numpy.zeros( 2*capacity - 1 ) # the numpy array representing the actual tree
self.data = numpy.zeros( capacity, dtype=object ) # the array representing the data (leaf) of the tree
self.write = 0
def _propagate(self, idx, change):
parent = (idx - 1) // 2
self.tree[parent] += change
if parent != 0:
self._propagate(parent, change)
def _retrieve(self, idx, s):
left = 2 * idx + 1
right = left + 1
if left >= len(self.tree):
return idx
if s <= self.tree[left]:
return self._retrieve(left, s)
else:
return self._retrieve(right, s-self.tree[left])
def total(self):
return self.tree[0]
def add(self, p, data):
idx = self.write + self.capacity - 1
self.data[self.write] = data
self.update(idx, p)
self.write += 1
if self.write >= self.capacity:
self.write = 0
def update(self, idx, p):
change = p - self.tree[idx]
self.tree[idx] = p
self._propagate(idx, change)
def get(self, s):
idx = self._retrieve(0, s)
dataIdx = idx - self.capacity + 1
return (idx, self.tree[idx], self.data[dataIdx])
</code></pre> | 2018-08-22 05:11:59.173000+00:00 | 2018-09-11 00:55:24.417000+00:00 | 2018-08-22 05:17:33.707000+00:00 | python|machine-learning|neural-network|deep-learning|reinforcement-learning | ['https://arxiv.org/abs/1511.05952', 'https://openreview.net/forum?id=B1Yy1BxCZ'] | 2 |
40,427,235 | <p>You should take a look at this work : <a href="https://github.com/Russell91/TensorBox" rel="nofollow noreferrer">https://github.com/Russell91/TensorBox</a> and the associated <a href="https://arxiv.org/pdf/1506.04878v3.pdf" rel="nofollow noreferrer">paper</a>.</p> | 2016-11-04 16:11:27.690000+00:00 | 2016-11-04 16:11:27.690000+00:00 | null | null | 40,425,184 | <p>I'm doing a project with Tensorflow which consist in analyzing UML diagrams drawn on a whiteboard or tablet devices to get in the end a file with the correct UML diagram, usable with softwares. The system will also use Machine learning (explaining why we choose Tensorflow).</p>
<p>As the project progresses with our research, my partner and I have been facing a problem: we don't know how to detect object positions in a picture with Tensorflow. We did some research and found some articles talking about it, but no real conclusion was available. We eventually came across <a href="https://stackoverflow.com/questions/34406792/how-to-to-find-location-roi-of-a-recognized-object-in-tensorflow">this</a>, but we're left with no real leads on what to do. </p>
<p>Our real question is more about: is there anything new since then (because Tensorflow is evolving pretty fast, in my opinion)? Could we have some articles/hints on what to do?
Thanks in advance.</p> | 2016-11-04 14:28:56.890000+00:00 | 2016-11-04 16:11:27.690000+00:00 | 2017-05-23 11:45:28.967000+00:00 | python|image|tensorflow|analysis | ['https://github.com/Russell91/TensorBox', 'https://arxiv.org/pdf/1506.04878v3.pdf'] | 2 |
45,389,552 | <p>In some cases, resizing the images appropriately (for example to keep the aspect ratio) will be sufficient. But this can introduce distortion, and in case this is harmful, another solution is to use Spatial Pyramid Pooling (SPP). The problem with different image sizes is that it produces layers of different sizes; for example, taking the features of the <code>n-th</code> layer of some network, you can end up with a featuremap of size <code>128*fw*fh</code> where <code>fw</code> and <code>fh</code> vary depending on the size of the input example. What SPP does in order to alleviate this problem is to turn this variable-size feature map into a fixed-length vector of features. It operates on different scales, by dividing the image into equal patches and performing maxpooling on them. I think <a href="https://arxiv.org/pdf/1406.4729.pdf" rel="nofollow noreferrer">this paper</a> does a great job at explaining it. An example application can be seen <a href="https://arxiv.org/pdf/1702.01381.pdf" rel="nofollow noreferrer">here</a>.</p>
<p>As a quick explanation, imagine you have a feature map of size <code>k*fw*fh</code>. You can consider it as <code>k</code> maps of the form </p>
<pre><code>X Y
Z T
</code></pre>
<p>where each of the blocks are of size <code>fw/2*fh/2</code>. Now, performing maxpooling on each of those blocks separately gives you a vector of size <code>4</code>, and therefore, you can grossly describe the <code>k*fw*fh</code> map as a <code>k*4</code> fixed-size vector of features. </p>
<p>Now, call this fixed-size vector <code>w</code> and set it aside, and this time, consider the <code>k*fw*fh</code> featuremap as <code>k</code> featureplanes written as</p>
<pre><code> A B C D
E F G H
I J K L
M N O P
</code></pre>
<p>and again, perform maxpooling separately on each block. So, using this, you obtain a more fine-grained representation, as a vector of length <code>v=k*16</code>.</p>
<p>Now, concatenating the two vectors <code>u=[v;w]</code> gives you a fixed-size representation. This is exactly what a 2-scale SPP does (well, of course you can change the number/sizes of divisions). </p>
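<p>A minimal NumPy sketch of that two-scale pooling (my own illustration; it assumes the feature map is at least 4x4 spatially): whatever <code>fw</code> and <code>fh</code> are, the output length is always <code>k*(4 + 16)</code>.</p>
<pre><code>import numpy as np

def spp_two_scale(fmap):                    # fmap shape: (k, fh, fw)
    k, fh, fw = fmap.shape
    pooled = []
    for n in (2, 4):                        # the 2x2 grid, then the 4x4 grid
        hs = np.array_split(np.arange(fh), n)
        ws = np.array_split(np.arange(fw), n)
        for h_idx in hs:
            for w_idx in ws:
                block = fmap[:, h_idx[0]:h_idx[-1] + 1, w_idx[0]:w_idx[-1] + 1]
                pooled.append(block.max(axis=(1, 2)))   # max pool each channel
    return np.concatenate(pooled)           # length k*(4 + 16), independent of fw, fh

print(spp_two_scale(np.random.rand(8, 13, 9)).shape)    # (160,)
print(spp_two_scale(np.random.rand(8, 28, 28)).shape)   # (160,)
</code></pre>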
<p>Hope this helps.</p> | 2017-07-29 12:56:33.817000+00:00 | 2017-07-29 12:56:33.817000+00:00 | null | null | 45,389,303 | <p>So, I've seen that many of the first CNN examples in Machine Learning use the MNIST dataset. Each image there is 28x28, and so we know the shape of the input before hand. How would this be done for variable size input, let's say you have some images that are 56x56 and some 28x28.</p>
<p>I'm looking for a language and framework agnostic answer if possible or in tensorflow terms preferable</p> | 2017-07-29 12:28:54.820000+00:00 | 2017-07-29 18:01:03.363000+00:00 | null | machine-learning|tensorflow | ['https://arxiv.org/pdf/1406.4729.pdf', 'https://arxiv.org/pdf/1702.01381.pdf'] | 2 |
45,389,506 | <p>When you use CNN for classification task, your network has two part:</p>
<ol>
<li><p><em>Feature generator</em>. This part generates a feature map of size <code>WF x HF</code> with <code>CF</code> channels from an image of size <code>WI x HI</code> with <code>CI</code> channels. The relation between image size and feature map size depends on the structure of your NN (for example, on the number of pooling layers and their strides).</p></li>
<li><p><em>Classifier</em>. This part solves the task of classifying vectors with <code>WF*HF*CF</code> components into classes.</p></li>
</ol>
<p>You can put images of different sizes into the <em>feature generator</em> and get feature maps of different sizes. But the classifier can only be trained on fixed-length vectors. Therefore you obviously train your network for some fixed image size. If you have images of a different size, you resize them to the input size of the network, or crop some part of the image (a small sketch of this preprocessing follows).</p>
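<p>As a quick illustration of that preprocessing (a minimal sketch, assuming a 224x224 target size), one common recipe is to resize the short side and then center-crop:</p>
<pre><code>from PIL import Image

def to_fixed_size(path, size=224):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = size / float(min(w, h))                    # resize so that the short side == size
    img = img.resize((int(round(w * scale)), int(round(h * scale))), Image.BILINEAR)
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2       # then center-crop to size x size
    return img.crop((left, top, left + size, top + size))
</code></pre>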
<p>Another way is described in the article </p>
<p><em>K. He, X. Zhang, S. Ren, J. Sun, "Spatial pyramid pooling in deep convolutional networks for visual recognition," <a href="https://arxiv.org/abs/1406.4729" rel="nofollow noreferrer">arXiv:1406.4729</a> 2014</em></p>
<p>The authors offered <strong>Spatial pyramid pooling</strong>, which solves the problem of different image sizes at the input of a CNN. But I'm not sure whether a spatial pyramid pooling layer exists in tensorflow.</p> | 2017-07-29 12:51:56.383000+00:00 | 2017-07-29 12:51:56.383000+00:00 | null | null | 45,389,303 | <p>So, I've seen that many of the first CNN examples in Machine Learning use the MNIST dataset. Each image there is 28x28, and so we know the shape of the input before hand. How would this be done for variable size input, let's say you have some images that are 56x56 and some 28x28.</p>
<p>I'm looking for a language and framework agnostic answer if possible or in tensorflow terms preferable</p> | 2017-07-29 12:28:54.820000+00:00 | 2017-07-29 18:01:03.363000+00:00 | null | machine-learning|tensorflow | ['https://arxiv.org/abs/1406.4729'] | 1 |
56,924,915 | <p>I think this following image from <a href="https://arxiv.org/pdf/1812.11794.pdf" rel="nofollow noreferrer">Deep Reinforcement Learning for Multi-Agent Systems: A Review of Challenges, Solutions and Applications</a> answers your question:
<a href="https://i.stack.imgur.com/vdKjH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vdKjH.jpg" alt="enter image description here"></a></p> | 2019-07-07 18:22:16.020000+00:00 | 2019-07-07 18:22:16.020000+00:00 | null | null | 56,868,307 | <p>I am new to reinforcement learning and I read about these two algorithms Actor Critic and DDQN. I found that both of these gives fairly good results. But because two algos are totally different so I want to know that where I should prefer actor critic and where DDQN should be preferred. Also what are the advantages and disadvantages of actor critic over DDQN.</p> | 2019-07-03 10:42:26.583000+00:00 | 2019-07-07 18:22:16.020000+00:00 | 2019-07-03 21:08:54.563000+00:00 | machine-learning|reinforcement-learning | ['https://arxiv.org/pdf/1812.11794.pdf', 'https://i.stack.imgur.com/vdKjH.jpg'] | 2 |
53,738,549 | <p>On this <a href="https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e" rel="nofollow noreferrer">URL</a> you can see 84 hours of training for RCNN. You can find more details on RCNN in this <a href="https://arxiv.org/pdf/1311.2524.pdf" rel="nofollow noreferrer">paper</a>.</p> | 2018-12-12 08:04:47.373000+00:00 | 2018-12-12 08:04:47.373000+00:00 | null | null | 53,734,931 | <p>I am now trying to train faster RCNN on COCO or VOCs.</p>
<p>But I'm having a hard time getting a good result.</p>
<p>In the paper, how many epochs did they train for?
I trained for a few epochs and the loss was still decreasing slowly, but I paused it because it takes too much time.</p>
<p>And is there any preprocessing they didn't mention in the paper?</p> | 2018-12-12 01:46:48.450000+00:00 | 2018-12-12 08:04:47.373000+00:00 | 2018-12-12 05:56:22.230000+00:00 | tensorflow|deep-learning|object-detection | ['https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e', 'https://arxiv.org/pdf/1311.2524.pdf'] | 2
63,600,954 | <p>I've been working with StyleGAN for a while, and I couldn't guess the reason with so little information.</p>
<p>One possible reason is the effect of the truncation trick: it pulls the results towards an average face with higher quality, or lets them deviate from it to obtain more variability, at the risk of added artefacts like yours. Check how you implemented this trick in Pytorch (a small sketch of the usual formulation is below).</p>
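<p>For reference, the truncation trick is usually applied in W space roughly like this (a minimal sketch of the idea only; <code>mapping</code>, <code>synthesis</code>, <code>w_avg</code> and <code>psi</code> are assumed names):</p>
<pre><code>def truncate(w, w_avg, psi=0.7):
    # psi = 1.0 -> no truncation (full variability), psi = 0.0 -> always the "average" face
    return w_avg + psi * (w - w_avg)

# typical use: w = mapping(z); w = truncate(w, w_avg, psi); img = synthesis(w)
</code></pre>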
<p>I recommend checking this repository (<a href="https://github.com/rosinality/style-based-gan-pytorch" rel="nofollow noreferrer">https://github.com/rosinality/style-based-gan-pytorch</a>) where they implemented StyleGAN in Pytorch. You can check whether you are missing something from the model there.</p>
<p>Finally, I would also suggest reading the StyleGAN2 paper (<a href="https://arxiv.org/abs/1912.04958" rel="nofollow noreferrer">https://arxiv.org/abs/1912.04958</a>) from the same authors, where they explain how they solved the droplet artifacts and improved the quality of StyleGAN's results.</p> | 2020-08-26 15:33:42.353000+00:00 | 2020-08-26 15:33:42.353000+00:00 | null | null | 63,594,267 | <p>I've written my own implementation of StyleGAN (paper here <a href="https://arxiv.org/abs/1812.04948" rel="nofollow noreferrer">https://arxiv.org/abs/1812.04948</a>), using PyTorch instead of Tensorflow, which is what the official implementation uses. I'm doing this partly as an exercise in implementing a scientific paper from scratch.</p>
<p>I have done my best to reproduce all the features mentioned in the paper and in the ProgressiveGAN paper which it is based on, and the network trains, but I consistently get blurry images and blob-shaped artifacts:</p>
<p><a href="https://i.stack.imgur.com/a0HbI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/a0HbI.png" alt="Example 1" /></a>
<a href="https://i.stack.imgur.com/kv4Wx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kv4Wx.png" alt="Example 2" /></a></p>
<p>I would very much like to know if anyone with experience of GANs in general or StyleGAN in particular has seen this phenomenon and can give me any insight into possible reasons for it.</p>
<p>(Some detail: I'm training on downsampled CelebA images, 600k images burn-in, 600k images fade-in, but I see very similar phenomena with a tiny toy dataset and a lot fewer iterations.)</p> | 2020-08-26 09:08:02.763000+00:00 | 2020-08-26 15:33:42.353000+00:00 | null | neural-network|pytorch|generative-adversarial-network|stylegan | ['https://github.com/rosinality/style-based-gan-pytorch', 'https://arxiv.org/abs/1912.04958'] | 2 |
53,796,253 | <blockquote>
<ol>
<li>Is my analysis correct? </li>
</ol>
</blockquote>
<p>Given my remarks in the comments that your network is certainly not <em>deep</em>, let's accept that your analysis is indeed correct (after all, your model does seem to do a good job <em>inside its training scope</em>), in order to get to your 2nd question, which is the interesting one.</p>
<blockquote>
<ol start="2">
<li>If the answer to 1 is yes, then isn't the prediction scope of deep learning very limited?</li>
</ol>
</blockquote>
<p>Well, this is the kind of questions not exactly suitable for SO, since the exact meaning of "very limited" is arguably unclear...</p>
<p>So, let's try to rephrase it: should we expect DL models to predict such numerical functions <em>outside</em> the numeric domain on which they have been trained?</p>
<p>An example from a different domain may be enlightening here: suppose we have built a model able to detect & recognize animals in photos with very high accuracy (it is not hypothetical; such models do exist indeed); should we complain when the very same model cannot detect and recognize airplanes (or trees, refrigerators etc - you name it) in these same photos?</p>
<p>Put like that, the answer is a clear & obvious <strong>no</strong> - we should not complain, and in fact we are certainly not even surprised by such a behavior in the first place.</p>
<p>It is tempting for us humans to think that such models should be able to <em>extrapolate</em>, especially in the numeric domain, since this is something we do very "easily" ourselves; but ML models, while exceptionally good at <em>interpolating</em>, they fail miserably in extrapolation tasks, such as the one you present here.</p>
<p>Trying to make it more intuitive, think that the whole "world" of such models is confined in the <em>domain</em> of their training sets: my example model above would be able to generalize and recognize animals in unseen photos as long as these animals are "between" (mind the quotes) the ones it has seen during training; in a similar manner, your model does a good job predicting the function value for arguments <em>between</em> the sample you have used for training. But in neither case these models are expected to go beyond their training domain (i.e. extrapolate). There is no "world" for my example model beyond animals, and similarly for your model beyond [-500, 500]...</p>
<p>For corroboration, consider the very recent paper <a href="https://arxiv.org/abs/1808.00508" rel="noreferrer">Neural Arithmetic Logic Units</a>, by DeepMind; quoting from the abstract:</p>
<blockquote>
<p>Neural networks can learn to represent and manipulate numerical information, but they seldom generalize well outside of the range of numerical values encountered during training.</p>
</blockquote>
<p>See also a <a href="https://twitter.com/reza_zadeh/status/1030331049073565697?s=11" rel="noreferrer">relevant tweet</a> of a prominent practitioner:</p>
<p><a href="https://i.stack.imgur.com/aygJG.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/aygJG.jpg" alt="enter image description here"></a></p>
<p>On to your third question:</p>
<blockquote>
<ol start="3">
<li>Is there a better algorithm for predicting functions like <code>y = x**2</code> both inside and outside the scope of training data?</li>
</ol>
</blockquote>
<p>As it should be clear by now, this is a (hot) area of current research; see the above paper for starters...</p>
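<p>Just to give a flavour of what such units look like, here is a rough PyTorch sketch of the NAC/NALU idea from the above paper (a simplification; see the paper for the exact initialization and other details):</p>
<pre><code>import torch
import torch.nn as nn

class NALU(nn.Module):
    def __init__(self, in_dim, out_dim, eps=1e-7):
        super(NALU, self).__init__()
        self.W_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.M_hat = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.G = nn.Parameter(torch.randn(out_dim, in_dim) * 0.1)
        self.eps = eps

    def forward(self, x):
        W = torch.tanh(self.W_hat) * torch.sigmoid(self.M_hat)        # weights biased towards {-1, 0, 1}
        a = x @ W.t()                                                  # additive path (good for +, -)
        m = torch.exp(torch.log(torch.abs(x) + self.eps) @ W.t())      # multiplicative path in log space
        g = torch.sigmoid(x @ self.G.t())                              # learned gate between the two paths
        return g * a + (1 - g) * m
</code></pre>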
<hr>
<p>So, are DL models limited? Definitely - forget the scary tales about AGI for the foreseeable future. Are they <em>very</em> limited, as you put it? Well, I don't know... But, given their limitation in extrapolating, are they <em>useful</em>?</p>
<p>This is arguably the real question of interest, and the answer is obviously - <em>hell, yeah</em>!</p> | 2018-12-15 18:51:23.783000+00:00 | 2019-02-01 14:44:09.883000+00:00 | 2019-02-01 14:44:09.883000+00:00 | null | 53,795,142 | <p>I am trying to create a simple deep-learning based model to predict <code>y=x**2</code>
But it looks like deep learning is not able to learn the general function <strong>outside the scope of its training set</strong>.</p>
<p>Intuitively, I suspect that a neural network might not be able to fit y=x**2 as there is no multiplication involved between the inputs.</p>
<p>Please note I am not asking how to create a model to fit <code>x**2</code>. I have already achieved that. I want to know the answers to following questions:</p>
<ol>
<li>Is my analysis correct? </li>
<li>If the answer to 1 is yes, then isn't the prediction scope of deep learning very limited?</li>
<li>Is there a better algorithm for predicting functions like y = x**2 both inside and outside the scope of training data?</li>
</ol>
<p>Path to complete notebook:
<a href="https://github.com/krishansubudhi/MyPracticeProjects/blob/master/KerasBasic-nonlinear.ipynb" rel="nofollow noreferrer">https://github.com/krishansubudhi/MyPracticeProjects/blob/master/KerasBasic-nonlinear.ipynb</a></p>
<p><strong>training input</strong>:</p>
<pre class="lang-py prettyprint-override"><code>x = np.random.random((10000,1))*1000-500
y = x**2
x_train= x
</code></pre>
<p><a href="https://i.stack.imgur.com/Npyf5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Npyf5.png" alt="input data"></a></p>
<p><strong>training code</strong></p>
<pre><code>def getSequentialModel():
model = Sequential()
model.add(layers.Dense(8, kernel_regularizer=regularizers.l2(0.001), activation='relu', input_shape = (1,)))
model.add(layers.Dense(1))
print(model.summary())
return model
def runmodel(model):
model.compile(optimizer=optimizers.rmsprop(lr=0.01),loss='mse')
from keras.callbacks import EarlyStopping
early_stopping_monitor = EarlyStopping(patience=5)
h = model.fit(x_train,y,validation_split=0.2,
epochs= 300,
batch_size=32,
verbose=False,
callbacks=[early_stopping_monitor])
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_18 (Dense) (None, 8) 16
_________________________________________________________________
dense_19 (Dense) (None, 1) 9
=================================================================
Total params: 25
Trainable params: 25
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p><strong>Evaluation on random test set</strong></p>
<p><a href="https://i.stack.imgur.com/i3Rxq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i3Rxq.png" alt="enter image description here"></a></p>
<p>Deep learning in this example is not good at predicting a simple non linear function. But good at predicting values in the sample space of training data.</p> | 2018-12-15 16:37:59.173000+00:00 | 2019-07-10 13:34:32.943000+00:00 | 2019-07-10 13:34:32.943000+00:00 | machine-learning|keras|neural-network|deep-learning|non-linear-regression | ['https://arxiv.org/abs/1808.00508', 'https://twitter.com/reza_zadeh/status/1030331049073565697?s=11', 'https://i.stack.imgur.com/aygJG.jpg'] | 3 |
61,608,224 | <p>Maybe helpful: take a look at <a href="https://arxiv.org/abs/1402.0939" rel="nofollow noreferrer">https://arxiv.org/abs/1402.0939</a>, which describes an efficient framework for contracting so-called tensor networks with a single function <code>ncon(...)</code>. As far as I can see, implementations of it are directly available for Matlab (can be found within the link) and for Python3 (<a href="https://github.com/mhauru/ncon" rel="nofollow noreferrer">https://github.com/mhauru/ncon</a>).</p> | 2020-05-05 07:48:58.503000+00:00 | 2020-05-05 07:48:58.503000+00:00 | null | null | 42,034,480 | <p>I have a list <code>L</code> of tensors (<code>ndarray</code> objects), with several indices each. I need to contract these indices according to a graph of connections. </p>
<p>The connections are encoded in a list of tuples in the form <code>((m,i),(n,j))</code> signifying "contract the <em>i</em>-th index of the tensor <code>L[m]</code> with the <em>j</em>-th index of the tensor <code>L[n]</code>.</p>
<p>How can I handle non-trivial connectivity graphs? The first problem is that as soon as I contract a pair of indices, the result is a new tensor that does not belong to the list <code>L</code>. But even if I solved this (e.g. by giving a unique identifier to all the indices of all the tensors), there is the issue that one can pick any order to perform the contractions, and some choices yield unnecessarily enormous beasts in mid-computation (even if the final result is small). Suggestions?</p> | 2017-02-03 23:11:34.577000+00:00 | 2020-05-05 07:48:58.503000+00:00 | 2017-02-04 00:12:23.790000+00:00 | python|numpy|vectorization|numpy-einsum | ['https://arxiv.org/abs/1402.0939', 'https://github.com/mhauru/ncon'] | 2 |
71,051,269 | <p>Have a look at the <a href="https://selectorgadget.com/" rel="nofollow noreferrer">SelectorGadget</a> Chrome extension to grab <code>CSS</code> selectors by clicking on the desired element in your browser.</p>
<p><a href="https://replit.com/@DimitryZub1/why-am-i-getting-repetitive-output-while-tryin#main.py" rel="nofollow noreferrer">Code and example in the online IDE</a> to extract PDF's:</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
import requests, lxml
params = {
"q": "entity resolution", # search query
"hl": "en" # language
}
# https://requests.readthedocs.io/en/master/user/quickstart/#custom-headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3538.102 Safari/537.36 Edge/18.19582",
}
html = requests.get("https://scholar.google.com/scholar", params=params, headers=headers, timeout=30)
soup = BeautifulSoup(html.text, "lxml")
for pdf_link in soup.select(".gs_or_ggsm a"):
pdf_file_link = pdf_link["href"]
print(pdf_file_link)
# output from the first page:
'''
https://linqs.github.io/linqs-website/assets/resources/getoor-vldb12-slides.pdf
http://ilpubs.stanford.edu:8090/859/1/2008-7.pdf
https://drum.lib.umd.edu/bitstream/handle/1903/4241/umi-umd-4070.pdf;sequence=1
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.169.9535&rep=rep1&type=pdf
https://arxiv.org/pdf/1208.1927
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.77.6875&rep=rep1&type=pdf
http://da.qcri.org/ntang/pubs/vldb18-deeper.pdf
'''
</code></pre>
<hr />
<p>Alternatively, you can achieve the same thing by using <a href="https://serpapi.com/google-scholar-organic-results" rel="nofollow noreferrer">Google Scholar Organic Results API</a> from SerpApi. It's a paid API with a free plan.</p>
<p>The main difference is that you only need to grab the data from structured JSON instead of figuring out how to extract the data from HTML, how to bypass blocks from search engines.</p>
<p>Code to integrate:</p>
<pre class="lang-py prettyprint-override"><code>from serpapi import GoogleSearch
params = {
"api_key": "YOUR_API_KEY", # SerpApi API key
"engine": "google_scholar", # Google Scholar organic reuslts
"q": "entity resolution", # search query
"hl": "en" # language
}
search = GoogleSearch(params)
results = search.get_dict()
for pdfs in results["organic_results"]:
for link in pdfs.get("resources", []):
pdf_link = link["link"]
print(pdf_link)
# output:
'''
https://linqs.github.io/linqs-website/assets/resources/getoor-vldb12-slides.pdf
http://ilpubs.stanford.edu:8090/859/1/2008-7.pdf
https://drum.lib.umd.edu/bitstream/handle/1903/4241/umi-umd-4070.pdf;sequence=1
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.169.9535&rep=rep1&type=pdf
https://arxiv.org/pdf/1208.1927
https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.77.6875&rep=rep1&type=pdf
http://da.qcri.org/ntang/pubs/vldb18-deeper.pdf
'''
</code></pre>
<hr />
<p>If you want to scrape more data from organic results, there's a dedicated <a href="https://dev.to/dmitryzub/scrape-google-scholar-with-python-32oh#organic_search" rel="nofollow noreferrer">Scrape Google Scholar with Python</a> blog post of mine.</p>
<blockquote>
<p>Disclaimer, I work for SerpApi.</p>
</blockquote> | 2022-02-09 14:23:16.730000+00:00 | 2022-02-09 14:23:16.730000+00:00 | null | null | 19,722,340 | <p>I am trying to scrape the PDF links from the search results from Google Scholar. I have tried to set a page counter based on the change in URL, but after the first eight output links, I am getting repetitive links as output.</p>
<pre><code>#!/usr/bin/env python
from mechanize import Browser
from BeautifulSoup import BeautifulSoup
from bs4 import BeautifulSoup
import urllib2
import requests
#modifying the url as per page
urlCounter = 0
while urlCounter <=30:
urlPart1 = "http://scholar.google.com/scholar?start="
urlPart2 = "&q=%22entity+resolution%22&hl=en&as_sdt=0,4"
url = urlPart1 + str(urlCounter) + urlPart2
page = urllib2.Request(url,None,{"User-Agent":"Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11"})
resp = urllib2.urlopen(page)
html = resp.read()
soup = BeautifulSoup(html)
urlCounter = urlCounter + 10
recordCount = 0
while recordCount <=9:
recordPart1 = "gs_ggsW"
finRecord = recordPart1 + str(recordCount)
recordCount = recordCount+1
#printing the links
for link in soup.find_all('div', id = finRecord):
linkstring = str(link)
soup1 = BeautifulSoup(linkstring)
for link in soup1.find_all('a'):
print(link.get('href'))
</code></pre> | 2013-11-01 07:11:29.400000+00:00 | 2022-02-09 14:23:16.730000+00:00 | 2014-02-06 16:54:19.660000+00:00 | python|web-scraping|urllib2|google-scholar | ['https://selectorgadget.com/', 'https://replit.com/@DimitryZub1/why-am-i-getting-repetitive-output-while-tryin#main.py', 'https://serpapi.com/google-scholar-organic-results', 'https://dev.to/dmitryzub/scrape-google-scholar-with-python-32oh#organic_search'] | 4 |
7,194,357 | <p>There is this paper by Bertot</p>
<p><a href="http://arxiv.org/abs/0810.2179" rel="nofollow">Structural abstract interpretation, A formal study using Coq</a> </p>
<p>That gives a full implementation of an abstract interpreter for a simple toy language using the Coq Proof Assistant. I used this for a concrete reference, and found it useful, although a little hard going, which is to be expected given the subject matter. Coq is a great little piece of software.</p>
<p>I also came across in a Cousot paper:</p>
<p><a href="http://www.di.ens.fr/~cousot/COUSOTpapers/MARKTOBERDORF-09.shtml" rel="nofollow">A gentle introduction to formal verification of computer systems by abstract interpretation</a></p>
<p>rough details (but I am sure there will be useful citations for full details) of an implementation in Astrée. I am not familiar with Astrée, so I didn't actually read that section, but I think it meets your criteria. </p>
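<p>In case a small self-contained taste helps before diving into those references, here is a toy sign-domain abstract interpreter for arithmetic expressions in Python (a minimal sketch of the idea only, unrelated to the Coq and Astrée developments above):</p>
<pre><code>NEG, ZERO, POS, TOP = "-", "0", "+", "T"    # the abstract "sign" values; TOP = unknown

def abs_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a        # adding zero changes nothing
    return a if a == b else TOP             # e.g. (+) + (-) could be anything

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG           # signs multiply as expected

def analyze(expr, env):
    # expr is ('const', n) | ('var', name) | ('+', e1, e2) | ('*', e1, e2)
    op = expr[0]
    if op == "const":
        return ZERO if expr[1] == 0 else (POS if expr[1] > 0 else NEG)
    if op == "var":
        return env[expr[1]]
    l, r = analyze(expr[1], env), analyze(expr[2], env)
    return abs_add(l, r) if op == "+" else abs_mul(l, r)

# x is known to be negative, y is unknown: x*x is derived as +, x*x + y as unknown
print(analyze(("*", ("var", "x"), ("var", "x")), {"x": NEG}))                     # +
print(analyze(("+", ("*", ("var", "x"), ("var", "x")), ("var", "y")),
              {"x": NEG, "y": TOP}))                                              # T
</code></pre>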
<p>If you come across anymore, please let me know! Would especially like to see a prolog abstract interpreter.</p> | 2011-08-25 17:03:12.493000+00:00 | 2011-08-25 17:03:12.493000+00:00 | null | null | 2,928,514 | <p>I am taking a course on abstract interpretation, but I haven't seen any examples of how the theory maps down to actual code.</p>
<p>I am looking for short code examples, where I preferably won't have to work with a whole compiler. The analysis doesn't have to be useful, I would just like to see an example where the analysis is derived and then implemented.</p>
<p>Does anyone know of any such examples, perhaps from a university course?</p> | 2010-05-28 11:24:42.673000+00:00 | 2016-02-29 09:23:32.833000+00:00 | 2011-06-08 16:47:41.660000+00:00 | abstract-interpretation | ['http://arxiv.org/abs/0810.2179', 'http://www.di.ens.fr/~cousot/COUSOTpapers/MARKTOBERDORF-09.shtml'] | 2 |
34,899,133 | <p>Short answer: no, Matlab does not support it (at least not that I'm aware of). Therefore you need to create a whole new model every time you get new input data. Depending on the size of the task this might still be the best choice.</p>
<p>Workaround: you can implement it yourself, by performing a gradient update of the logistic loss every time a new sample comes in. Take a look at this paper if you decide to go this way (it is about many kinds of loss functions, but you are interested in the logistic one):
<a href="http://arxiv.org/abs/1011.1576" rel="nofollow">http://arxiv.org/abs/1011.1576</a></p>
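<p>For what it's worth, the per-sample update this boils down to is only a few lines; here is a minimal Python sketch of such a stochastic-gradient update (the learning rate is an arbitrary assumption, and the same update is easy to port to Matlab):</p>
<pre><code>import numpy as np

class OnlineLogReg(object):
    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(self.w.dot(x) + self.b)))

    def partial_fit(self, x, y):                 # y in {0, 1}
        err = self.predict_proba(x) - y          # gradient factor of the logistic loss
        self.w -= self.lr * err * x
        self.b -= self.lr * err

# evaluate on each new sample first, then update with it:
# model = OnlineLogReg(n_features=10)
# p = model.predict_proba(x_new); model.partial_fit(x_new, y_new)
</code></pre>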
<p>Or you could go Bayesian and update your priors any time a new point comes in. </p> | 2016-01-20 11:44:47.637000+00:00 | 2016-01-20 11:44:47.637000+00:00 | null | null | 34,875,371 | <p>I wish to use online logistic regression training in Matlab in which I train the model by presenting the first sample, evaluate the model, next add the second sample, evaluate etc. etc.</p>
<p>I could do this by first creating a model on the first sample, evaluating it, throwing this model away; next creating a model on samples one and two, evaluating it, etc., but this is very inefficient. Is there a way I could do 'real' online training of the logistic regression model in Matlab?</p> | 2016-01-19 11:17:22.657000+00:00 | 2016-01-20 11:44:47.637000+00:00 | null | matlab|machine-learning|logistic-regression | ['http://arxiv.org/abs/1011.1576'] | 1
46,471,620 | <p>After years of working on this topic, I can say now that what I wanted to do takes a big effort, it's quite slow, and it NEVER worked as I expected. The irregularities of the pixels in the characters are always unpredictable; that's why "easy algorithms" don't work. </p>
<p>Question: is it impossible, then, to have a decent OCR which can read damaged characters? </p>
<p>Answer: No, it's not impossible. But it takes "a bit" more than just using erosion, morphological closing or something like that. </p>
<p>Then, how? Neural Networks :)</p>
<p>Here are two amazing papers that help me a lot:</p>
<p><a href="https://www.researchgate.net/publication/260341307_Can_we_build_language-independent_OCR_using_LSTM_networks" rel="nofollow noreferrer">Can we build language-independent OCR using LSTM networks?</a></p>
<p><a href="https://arxiv.org/pdf/1506.04395v2.pdf" rel="nofollow noreferrer">Reading Scene Text in Deep Convolutional Sequences</a></p>
<p>And for those who aren't familiar with RNN, I can suggest this: </p>
<p><a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="nofollow noreferrer">Understanding LSTM Networks</a></p>
<p>There's also a python library which works pretty well (and unfortunately even better for C++):</p>
<p><a href="https://github.com/tmbdev/ocropy" rel="nofollow noreferrer">ocropy</a></p>
<p>I really hope this can help someone. </p> | 2017-09-28 14:26:41.077000+00:00 | 2017-09-28 14:26:41.077000+00:00 | null | null | 39,375,498 | <p>I'm working with images that have text. The problem is that these images are receipts, and after a lot of transformations, the text lost quality.
I'm using python and opencv.
I was trying with a lot of combinations of morphological transformations from the doc <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html" rel="nofollow noreferrer">Morphological Transformations</a>, but I don't get satisfactory results. </p>
<p>I'm doing this right now (I'll comment what I've tried, and just let uncommented what I'm using):</p>
<pre><code>kernel = np.ones((2, 2), np.uint8)
# opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
# closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
# dilation = cv2.dilate(opening, kernel, iterations=1)
# kernel = np.ones((3, 3), np.uint8)
erosion = cv2.erode(img, kernel, iterations=1)
# gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)
#
img = erosion.copy()
</code></pre>
<p>With this, from this original image:</p>
<p><a href="https://i.stack.imgur.com/fKJkH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fKJkH.png" alt="enter image description here"></a></p>
<p>I get this:</p>
<p><a href="https://i.stack.imgur.com/Hvmvk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Hvmvk.png" alt="enter image description here"></a></p>
<p>It's a little bit better, as you can see. But it still too bad. The OCR (tesseract) doesn't recognize the characters here very well. I've trained, but as you can note, every "e" is different, and so on. </p>
<p>I get good results, but I think, if I resolve this problem, they would be even better. </p>
<p>Maybe I can do another thing, or use a better combination of the morphological transformations. If there is another tool (PIL, imagemagick, etc..) that I could use, I can use it. </p>
<p>Here's the whole image, so you can see how it looks:</p>
<p><a href="https://i.stack.imgur.com/AorH6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AorH6.png" alt="enter image description here"></a></p>
<p>As I said, it's not so bad, but a little be more "optimization" of the letters would be perfect. </p> | 2016-09-07 16:53:38.753000+00:00 | 2017-09-28 14:26:41.077000+00:00 | null | python|image|opencv|letters | ['https://www.researchgate.net/publication/260341307_Can_we_build_language-independent_OCR_using_LSTM_networks', 'https://arxiv.org/pdf/1506.04395v2.pdf', 'https://colah.github.io/posts/2015-08-Understanding-LSTMs/', 'https://github.com/tmbdev/ocropy'] | 4 |
60,028,766 | <p>Sorry, but I'm afraid you have to look a bit at the math of the DDPG algorithm here to understand why it is called "target network". DDPG minimizes the following loss (from the original paper <a href="https://arxiv.org/pdf/1509.02971.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.02971.pdf</a>):</p>
<p><a href="https://i.stack.imgur.com/Z9UHm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z9UHm.png" alt="enter image description here"></a></p>
<p>where <strong>Q</strong> is represented by your neural network aka. your "agent" and <strong>y</strong> is the so-called <strong>target</strong>. It is called <em>target</em> because you want the values of your agent to be close to it. Just for clarification: <em>Q(s_t, a_t | theta)</em> corresponds to the output of your agent at time step <em>t</em>, given state <em>s</em>, action <em>a</em> and network weights <em>theta</em>.</p>
<p>However, as you can see, the target <em>y</em> depends on the same (neural network) parameters theta of your agent. In practice, this dependency leads to instabilities when minimizing the above loss. </p>
<p>One trick to mitigate this problem is to use a "second" target network (a minimal sketch of both update styles follows the list below), where the target network is either</p>
<ul>
<li>a frozen state of the agent ("regular") network and just copied over from the regular network every some-fixed-number of steps (e.g. every 10,000 iterations). This is the approach taken in DQN.</li>
<li>or a lagged version of the actual agent ("regular") network, where the lagging is achieved via so-called polyak averaging. That is, instead of updating the weights of your target network by just copying the ones of regular network, at each iteration you take some sort of weighted average. This is the approach taken in DDPG.</li>
</ul>
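<p>A minimal PyTorch sketch of the two update styles (illustration only, not tied to any particular implementation; <code>tau=0.005</code> is a typical but arbitrary choice):</p>
<pre><code>import copy
import torch

net = torch.nn.Linear(4, 2)                       # stand-in for the regular (agent) network
target = copy.deepcopy(net)                       # the target network starts as an exact copy

def hard_update(target, net):                     # DQN style: copy the weights every N steps
    target.load_state_dict(net.state_dict())

def polyak_update(target, net, tau=0.005):        # DDPG style: lagged (polyak) average every step
    with torch.no_grad():
        for tp, p in zip(target.parameters(), net.parameters()):
            tp.mul_(1.0 - tau).add_(tau * p)
</code></pre>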
<p>So simply put, the target network is nothing else than just a lagged version of the regular network.</p> | 2020-02-02 17:33:49.230000+00:00 | 2020-02-02 17:40:28.580000+00:00 | 2020-02-02 17:40:28.580000+00:00 | null | 59,891,158 | <blockquote>
<p>How does it differ from regular network
Source Text --> "In DDPG algorithm topology consist of two copies of network weights for each network, (Actor: regular and target) and (Critic: regular and target)"</p>
</blockquote> | 2020-01-24 06:04:04.793000+00:00 | 2020-02-02 17:40:28.580000+00:00 | null | reinforcement-learning|policy-gradient-descent | ['https://arxiv.org/pdf/1509.02971.pdf', 'https://i.stack.imgur.com/Z9UHm.png'] | 2 |
16,490,678 | <p>Use <a href="http://arxiv.org/pdf/1011.1533.pdf" rel="noreferrer">log binning</a> (<a href="http://arxiv.org/pdf/0706.1062.pdf" rel="noreferrer">see also</a>). Here is code to take a <code>Counter</code> object representing a histogram of degree values and log-bin the distribution to produce a sparser and smoother distribution.</p>
<pre><code>import numpy as np
from math import log10   # log10 is used below but was missing from the original snippet
def drop_zeros(a_list):
return [i for i in a_list if i>0]
def log_binning(counter_dict,bin_count=35):
max_x = log10(max(counter_dict.keys()))
max_y = log10(max(counter_dict.values()))
max_base = max([max_x,max_y])
min_x = log10(min(drop_zeros(counter_dict.keys())))
bins = np.logspace(min_x,max_base,num=bin_count)
# Based off of: http://stackoverflow.com/questions/6163334/binning-data-in-python-with-scipy-numpy
bin_means_y = (np.histogram(counter_dict.keys(),bins,weights=counter_dict.values())[0] / np.histogram(counter_dict.keys(),bins)[0])
bin_means_x = (np.histogram(counter_dict.keys(),bins,weights=counter_dict.keys())[0] / np.histogram(counter_dict.keys(),bins)[0])
return bin_means_x,bin_means_y
</code></pre>
<p>Generating a classic scale-free network in <code>NetworkX</code> and then plotting this:</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt      # needed for the plotting calls below
from collections import Counter      # needed to build the degree histogram
ba_g = nx.barabasi_albert_graph(10000,2)
ba_c = nx.degree_centrality(ba_g)
# To convert normalized degrees to raw degrees
#ba_c = {k:int(v*(len(ba_g)-1)) for k,v in ba_c.iteritems()}
ba_c2 = dict(Counter(ba_c.values()))
ba_x,ba_y = log_binning(ba_c2,50)
plt.xscale('log')
plt.yscale('log')
plt.scatter(ba_x,ba_y,c='r',marker='s',s=50)
plt.scatter(ba_c2.keys(),ba_c2.values(),c='b',marker='x')
plt.xlim((1e-4,1e-1))
plt.ylim((.9,1e4))
plt.xlabel('Connections (normalized)')
plt.ylabel('Frequency')
plt.show()
</code></pre>
<p>Produces the following plot showing the overlap between the "raw" distribution in blue and the "binned" distribution in red.</p>
<p><img src="https://i.imgur.com/o8piIas.png" alt="Comparison between raw and log-binned"></p>
<p>Thoughts on how to improve this approach or feedback if I've missed something obvious are welcome.</p> | 2013-05-10 20:54:16.513000+00:00 | 2013-05-14 22:01:57.033000+00:00 | 2013-05-14 22:01:57.033000+00:00 | null | 16,489,655 | <p>I have often encountered and made long-tailed degree distributions/histograms from complex networks like the figures below. They make the heavy end of these tails, well, very heavy and crowded from many observations:</p>
<p><img src="https://i.stack.imgur.com/R7RzP.png" alt="Classic long-tailed degree distribution"> </p>
<p>However, many publications I read have much cleaner degree distributions that don't have this clumpiness at the end of the distribution and the observations are more evenly-spaced.</p>
<p>!<img src="https://i.imgur.com/RxAq46b.png" alt="Classic long-tailed degree distribution"></p>
<p>How do you make a chart like this using <code>NetworkX</code> and <code>matplotlib</code>?</p> | 2013-05-10 19:38:00.253000+00:00 | 2014-03-24 18:22:43.887000+00:00 | 2013-05-10 20:53:55.293000+00:00 | python|numpy|matplotlib|networkx|scientific-computing | ['http://arxiv.org/pdf/1011.1533.pdf', 'http://arxiv.org/pdf/0706.1062.pdf'] | 2 |
62,225,443 | <p>A surface dice implementation was provided <a href="https://github.com/deepmind/surface-distance" rel="nofollow noreferrer">here</a> as part of <a href="https://arxiv.org/pdf/1809.04430.pdf" rel="nofollow noreferrer">this</a> study. You can use it as an evaluation metric but not as a loss function as it contains non-differentiable ops. You will need to provide a "tolerance" distance i.e. a surface dice of 0.9 means that 90% of surfaces lie within the tolerance (which is better calculated from the data itself, such as the inter-observer variation of the task you are solving)</p> | 2020-06-05 23:18:39.383000+00:00 | 2020-06-05 23:18:39.383000+00:00 | null | null | 56,685,144 | <p>I would like to compute the <em>Surface Dice-Sørensen Coefficient</em> from this <a href="https://arxiv.org/pdf/1809.04430.pdf" rel="nofollow noreferrer">paper</a> (page 19)in python3/pytorch.</p>
<p>I have to point out that I am <strong>not</strong> trying to implement the simple <em>standard volumetric Dice-Sørensen Coefficient</em>! That one would look as follows in my implementation:</p>
<pre class="lang-py prettyprint-override"><code>import torch
def volumetric_DSC(M1, M2):
M1 = M1.view(-1)
M2 = M2.view(-1)
dividend = 2 * (M1 * M2).sum()
divisor = (M1 * M1).sum() + (M2 * M2).sum()
return dividend / divisor
if __name__ == "__main__":
m1 = torch.empty(5, 5, 5).uniform_(0, 1)
m1 = torch.bernoulli(m1)
m2 = torch.empty(5, 5, 5).uniform_(0, 1)
m2 = torch.bernoulli(m2)
loss = volumetric_DSC(m1, m2)
print("loss = {0}".format(loss))
</code></pre>
<p>How can I extend this code to a Surface Dice-Sørensen Coefficient loss?</p> | 2019-06-20 11:37:50.663000+00:00 | 2020-06-05 23:18:39.383000+00:00 | null | python-3.x|pytorch | ['https://github.com/deepmind/surface-distance', 'https://arxiv.org/pdf/1809.04430.pdf'] | 2 |
46,700,947 | <p>I'm not sure you'll stand to gain much performance by using a gigantic amount of straight-line code instead of much smaller code with loops, since there's significant overhead in continually thrashing the instruction cache for so long, and the overhead of conditional jumps has gotten much better over the past several years. I was dubious when Intel made claims along those lines, and some of their statements were rather hyperbolic, but it has improved a lot in common cases. You can still always avoid call instructions if you need to for simplicity, even for tree recursive functions, by effectively simulating "the stack" with "a stack" (possibly itself on "the stack"), in the worst case.</p>
<p>That leaves two reasons I can think of that you'd want to stick with straight-line code that's only executed once on a modern computer: 1) it's too complicated to figure out how to express what needs to be computed with less code using jumps, or 2) it's an extremely heterogeneous problem being solved that actually needs so much code. #2 is quite uncommon in practice, though possible in a computer theoretical sense; I've just never encountered such a problem. If it's #1 and the issue is just how to efficiently encode the jumps as either short or near jumps, <a href="https://arxiv.org/abs/0812.4973" rel="nofollow noreferrer">there are ways</a>. (I've also just recently gotten back into x86-64 machine code generation in a side project, after years of not touching my assembler/linker, but it's not ready for use yet.)</p>
<p>Anyway, it's a bit hard to know what the stumbling block is, but I suspect that you'll get much better performance if you can figure out a way to avoid generating gigabytes of code, even if it may seem suboptimal on paper. Either way, it's usually best to try several options and see what works best experimentally if it's unclear. I've sometimes found surprising results that way. Best of luck!</p> | 2017-10-12 03:31:05.980000+00:00 | 2017-10-12 03:31:05.980000+00:00 | null | null | 46,698,590 | <p>Dynamically generating code is pretty well-known technique, for example to speed up <a href="https://en.wikipedia.org/wiki/Just-in-time_compilation" rel="nofollow noreferrer">interpreted languages</a>, <a href="https://lwn.net/Articles/437981/" rel="nofollow noreferrer">domain-specific languages</a> and so on. Whether you want to work <a href="https://github.com/asmjit/asmjit" rel="nofollow noreferrer">low-level</a> (close to 1:1 with assembly), or <a href="https://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-3-llvm/" rel="nofollow noreferrer">high-level</a> you can find libraries you help you out. </p>
<p>Note the distinction between <em>self-modifying code</em> and <em>dynamically-generated code</em>. The former means that some code that has executed will be modified in part and then executed again. The latter means that some code, that doesn't exist statically in the process binary on disk, is written to memory and then executed (but will not necessarily ever be modified). The distinction might be important below or simply because people treat self-modifying code as a smell, but dynamically generated code as a great performance trick.</p>
<p>The usual use-case is that the generated code will be executed many times. This means the focus is usually on the efficiency of the generated code, and to a lesser extent the compilation time, and least of all the mechanics of actually writing the code, making it executable and starting execution.</p>
<p>Imagine however, that your use case was generating code that will execute <em>exactly once</em> and that this is straight-line code without loops. The "compilation" process that generates the code is very fast (close to <code>memcpy</code> speed). In this case, the actual mechanics of writing to the code to memory and executing it once become important for performance. </p>
<p>For example, the total amount of code executed may be 10s of GBs or more. Clearly you don't want to just write all out to a giant buffer without any re-use: this would imply writing 10GB to memory and perhaps also reading 10GB (depending on how generation and execution was interleaved). Instead you'd probably want to use some reasonably sized buffer (say to fit in the L1 or L2 cache): write out a buffer's worth of code, execute it, then overwrite the buffer with the next chunk of code and so on.</p>
<p>The problem is that this seems to raise the spectre of <em>self-modifying code</em>. Although the "overwrite" is complete, you are still overwriting memory that was at one point already executed as instructions. The newly written code has to somehow make its way from the L1D to the L1I, and the <a href="https://stackoverflow.com/q/34017361/149138">associated performance hit is not clear</a>. In particular, there have <a href="https://software.intel.com/en-us/forums/software-tuning-performance-optimization-platform-monitoring/topic/635248" rel="nofollow noreferrer">been reports</a> that simply writing to the code area that has already been executed may suffer penalties of 100s of cycles and that the number of writes may be important.</p>
<p>What's the best way of generating a large about of dynamically generated straight-line code on x86 and executing it?</p> | 2017-10-11 22:36:11.537000+00:00 | 2017-10-12 03:31:05.980000+00:00 | 2017-10-12 01:59:33.517000+00:00 | performance|assembly|x86|jit|micro-optimization | ['https://arxiv.org/abs/0812.4973'] | 1 |
58,834,770 | <p>You should look into <strong>entity embedding</strong> if you are searching for a way to utilize embeddings for categorical variables.</p>
<ul>
<li>google has a good crash course on the topic: <a href="https://developers.google.com/machine-learning/crash-course/embeddings/categorical-input-data" rel="nofollow noreferrer">https://developers.google.com/machine-learning/crash-course/embeddings/categorical-input-data</a></li>
<li>this is a good paper on arxiv written by a team from a Kaggle competition: <a href="https://arxiv.org/abs/1604.06737" rel="nofollow noreferrer">https://arxiv.org/abs/1604.06737</a></li>
</ul> | 2019-11-13 10:13:31.040000+00:00 | 2019-11-13 10:13:31.040000+00:00 | null | null | 58,834,647 | <p>I am facing a binary prediction task and have a set of features of which all are categorical. A key challenge is therefore to encode those categorical features to numbers and I was looking for smart ways to do so.
I stumbled over word2vec, which is mostly used for NLP, but I was wondering whether I could use it to encode my variables, i.e. simply take the weights of the neural net as the encoded features.</p>
<p>However, I am not sure, whether it is a good idea since, the context words, which serve as the input features in word2vec are in my case more or less random, in contrast to real sentences which word2vec was originially made for. </p>
<p>Do you guys have any advice, thoughts, recommendations on this?</p> | 2019-11-13 10:07:03.403000+00:00 | 2019-11-13 19:06:56.787000+00:00 | null | machine-learning|nlp|word2vec|categorical-data|feature-engineering | ['https://developers.google.com/machine-learning/crash-course/embeddings/categorical-input-data', 'https://arxiv.org/abs/1604.06737'] | 2 |
59,425,828 | <p>Your kind of task belongs to dense classification tasks, e.g. segmentation. In those tasks, we use fully convolutional nets (see <a href="https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf" rel="nofollow noreferrer">here</a> for the original paper). In FCNs you don't have any fully-connected layers, because when applying fully-connected layers you lose the spatial information which you need for dense prediction. Also have a look at the <a href="https://arxiv.org/pdf/1505.04597.pdf" rel="nofollow noreferrer">U-Net paper</a>. All state-of-the-art architectures use some kind of encoder-decoder architecture, extended for example with a pyramid pooling module.</p>
<p>There are some implementations in the pytorch model zoo <a href="https://pytorch.org/docs/stable/torchvision/models.html#semantic-segmentation" rel="nofollow noreferrer">here</a>. Search also Github for pytorch implementations for other networks.</p> | 2019-12-20 13:10:52.730000+00:00 | 2019-12-20 20:09:21.087000+00:00 | 2019-12-20 20:09:21.087000+00:00 | null | 59,425,733 | <p>I am new to ML and Pytorch and I have the following problem:</p>
<p>I am looking for a Fully Convolutional Network architecture in Pytorch, so that the input would be an RGB image (HxWxC or 480x640x3) and the output would be a single channel image (HxW or 480x640). In other words, I am looking for a network that will preserve the resolution of the input (HxW), and will loose the channel dimension. All of the networks that I've came across (ResNet, Densenet, ...) end with a fully connected layer (without any upsampling or deconvolution). This is problematic for two reasons:</p>
<ol>
<li>I am restricted with the choice of the input size (HxWxC).</li>
<li>It has nothing to do with the output that I expect to get (a single channel image HxW).</li>
</ol>
<p>What am I missing? Why is there even a FC layer? Why is there no up-sampling, or some deconvolution layers after feature extraction? Is there any build-in torchvision.model that might suit my requirements? Where can I find such pytorch architecture? As I said, I am new in this field so I don't really like the idea of building such a network from scratch.</p>
<p>Thanks.</p> | 2019-12-20 13:03:05.690000+00:00 | 2019-12-20 20:09:21.087000+00:00 | null | conv-neural-network|pytorch | ['https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf', 'https://arxiv.org/pdf/1505.04597.pdf', 'https://pytorch.org/docs/stable/torchvision/models.html#semantic-segmentation'] | 3 |
14,978,725 | <p>For datasets, see experiments section of <a href="http://arxiv.org/abs/1112.2903" rel="nofollow">this work</a>.</p>
<p>For new algorithms see:</p>
<ul>
<li><p><a href="http://www.wisdom.weizmann.ac.il/~ronen/papers/Vitaladevuni%20Basri%20-%20Co-Clustering%20of%20Image%20Segments%20Using%20Convex%20Optimization%20Applied%20to%20EM%20Neuronal%20Reconstruction.pdf" rel="nofollow">Vitaladevuni and Basri</a> CVPR 2010</p></li>
<li><p><a href="http://arxiv.org/abs/1112.2903" rel="nofollow">Bagon and Galun</a> arXiv 2011.</p></li>
<li><p><a href="http://arxiv.org/abs/1208.0378" rel="nofollow">Yarkony et al</a> ECCV 2012</p></li>
<li><p><a href="http://www.nowozin.net/sebastian/papers/kim2011clustering.pdf" rel="nofollow">Kim et al</a> NIPS 2011</p></li>
<li><p><a href="http://hci.iwr.uni-heidelberg.de/publications/mip/techrep/andres_12_globally.pdf" rel="nofollow">Andres et al</a> ECCV 2012</p></li>
</ul>
<p>Just to name a few. </p> | 2013-02-20 11:29:17.857000+00:00 | 2013-02-20 12:01:34.267000+00:00 | 2013-02-20 12:01:34.267000+00:00 | null | 14,978,668 | <p>I am looking for new datasets of documents, from which to extract the matrix terms-documents, to perform co-clustering algorithms.<br></p>
<p>I am looking for single-label datasets only and prefer free-access ones.</p>
<p>I already know the following datasets.: <br>
CSTR<br>
WebKB4<br>
Newsgroups<br>
Reuters<br>
K1A, K1B, wap (WebACE Project)<br></p>
<p>Do you know of any others? <br></p>
<p>Do you also know of any new co-clustering algorithms created in the last two years?
<br><br>
thanks</p> | 2013-02-20 11:25:52.350000+00:00 | 2014-03-23 14:33:56.223000+00:00 | 2013-02-20 12:13:10.957000+00:00 | matlab|dataset|cluster-analysis | ['http://arxiv.org/abs/1112.2903', 'http://www.wisdom.weizmann.ac.il/~ronen/papers/Vitaladevuni%20Basri%20-%20Co-Clustering%20of%20Image%20Segments%20Using%20Convex%20Optimization%20Applied%20to%20EM%20Neuronal%20Reconstruction.pdf', 'http://arxiv.org/abs/1112.2903', 'http://arxiv.org/abs/1208.0378', 'http://www.nowozin.net/sebastian/papers/kim2011clustering.pdf', 'http://hci.iwr.uni-heidelberg.de/publications/mip/techrep/andres_12_globally.pdf'] | 6 |
50,805,280 | <p>You mention you are trying to solve the ABA problem, but the description and code is actually an attempt to solve a harder problem: the <a href="https://arxiv.org/abs/1712.06134" rel="noreferrer">memory reclamation</a> problem.</p>
<p>This problem typically arises in the "deletion" functionality of lock-free collections implemented in languages without garbage collection. The core issue is that a thread removing a node from a shared structure often doesn't know when it is safe to free the removed node because other readers may still have a reference to it. Solving this problem often, as a side effect, <em>also</em> solves the ABA problem, which is specifically about a CAS operation succeeding even though the underlying pointer (and state of the object) has been changed at least twice in the meantime, ending up with the original <em>value</em> but presenting a totally different state. </p>
<p>The ABA problem is easier in the sense that there are several straightforward solutions to the ABA problem specifically that don't lead to a solution to the "memory reclamation" problem. It is also easier in the sense that hardware that can detect the modification of the location, e.g., with LL/SC or transactional memory primitives, might not exhibit the problem at all.</p>
<p>So that said, you are hunting for a solution to the memory reclamation problem, and it will also avoid the ABA problem.</p>
<p>The core of your issue is this statement:</p>
<blockquote>
<p>The thread that successfully updates the list then loads the atomic
list.entries, and basically spin-loads atomic.exits until that counter
finally exceeds list.entries. <strong>This should imply that all readers of
the "old" version of the list have completed.</strong> The thread then simply
frees the the list of marked nodes that it swapped off the top of the
list.</p>
</blockquote>
<p>This logic doesn't hold. Waiting for <code>list.exits</code> (you say <em>atomic.exits</em> but I think it's a typo as you only talk about <code>list.exits</code> elsewhere) to be greater than <code>list.entries</code> only tells you there have now been <em>more total exits</em> than there were <em>entries</em> at the point the mutating thread captured the entry count. However, these exits may have been generated by new readers coming and going: it doesn't at all imply that <em>all the old readers have finished</em> as you claim!</p>
<p>Here's a simple example. First a writing thread <code>T1</code> and a reading thread <code>T2</code> access the list around the same time, so <code>list.entries</code> is 2 and <code>list.exits</code> is 0. The writing thread pops a node, and saves the current value (2) of <code>list.entries</code> and waits for <code>list.exits</code> to be greater than 2. Now three more reading threads, <code>T3</code>, <code>T4</code>, <code>T5</code> arrive and do a quick read of the list and leave. Now <code>list.exits</code> is 3, and your condition is met and <code>T1</code> frees the node. <code>T2</code> hasn't gone anywhere though and blows up since it is reading a freed node!</p>
<p>The basic idea you have can work, but your two counter approach in particular definitely doesn't work.</p>
<p>This is a well-studied problem, so you don't have to invent your own algorithm (see the link above), or even write your own code since things like <a href="https://liburcu.org/" rel="noreferrer">librcu</a> and <a href="https://github.com/concurrencykit/ck" rel="noreferrer">concurrencykit</a> already exist.</p>
<h3>For Educational Purposes</h3>
<p>If you <em>wanted</em> to make this work for educational purposes though, one approach would be to ensure that threads coming in after a modification has started use a different set of <code>list.entry/exit</code> counters. One way to do this would be a generation counter: when the writer wants to modify the list, it increments the generation counter, which causes new readers to switch to a different set of <code>list.entry/exit</code> counters.</p>
<p>Now the writer just has to wait for <code>list.entry[old] == list.exits[old]</code>, which means all the <em>old</em> readers have left. You could also just get away with a single counter per generation: you don't really need two <code>entry/exit</code> counters (although it might help reduce contention).</p>
<p>Of course, you now have a new problem of managing this list of separate counters per generation... which kind of looks like the original problem of building a lock-free list! This problem is a bit easier though because you might put some reasonable bound on the number of generations "in flight" and just allocate them all up-front, or you might implement a limited type of lock-free list that is easier to reason about because additions and deletions only occur at the head or tail.</p>
<p>Basically, when any thread attempts to traverse the list, one atomic counter (list.entries) is incremented. When the traversal is complete, a second counter (list.exits) is incremented.</p>
<p>Node allocation is handled by push, and deallocation is handled by pop.</p>
<p>The push and pop operations are fairly similar to the naive lock-free stack implementation, but the nodes marked for removal must be traversed to arrive at a non-marked entry. Push basically is therefore much like a linked list insertion.</p>
<p>The pop operation similarly traverses the list, but it uses atomic_fetch_or to mark the nodes as removed while traversing, until it reaches a non-marked node.</p>
<p>After traversing the list of 0 or more marked nodes, a thread that is popping will attempt to CAS the head of the stack. At least one thread concurrently popping will succeed, and after this point all readers entering the stack will no longer see the formerly marked nodes.</p>
<p>The thread that successfully updates the list then loads the atomic list.entries, and basically spin-loads atomic.exits until that counter finally exceeds list.entries. This should imply that all readers of the "old" version of the list have completed. The thread then simply frees the the list of marked nodes that it swapped off the top of the list.</p>
<p>So the implications from the pop operation should be (I think) that there can be no ABA problem because the nodes that are freed are not returned to the usable pool of pointers until all concurrent readers using them have completed, and obviously the memory reclamation issue is handled as well, for the same reason.</p>
<p>So anyhow, that is theory, but I'm still scratching my head on the implementation, because it is currently not working (in the multithreaded case). It seems like I am getting some write after free issues among other things, but I'm having trouble spotting the issue, or maybe my assumptions are flawed and it just won't work.</p>
<p>Any insights would be greatly appreciated, both on the concept, and on approaches to debugging the code.</p>
<p>Here is my current (broken) code (compile with gcc -D_GNU_SOURCE -std=c11 -Wall -O0 -g -pthread -o list list.c):</p>
<pre><code>#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <stdio.h>
#include <unistd.h>
#define NUM_THREADS 8
#define NUM_OPS (1024 * 1024)
typedef uint64_t list_data_t;
typedef struct list_node_t {
struct list_node_t * _Atomic next;
list_data_t data;
} list_node_t;
typedef struct {
list_node_t * _Atomic head;
int64_t _Atomic size;
uint64_t _Atomic entries;
uint64_t _Atomic exits;
} list_t;
enum {
NODE_IDLE = (0x0),
NODE_REMOVED = (0x1 << 0),
NODE_FREED = (0x1 << 1),
NODE_FLAGS = (0x3),
};
static __thread struct {
uint64_t add_count;
uint64_t remove_count;
uint64_t added;
uint64_t removed;
uint64_t mallocd;
uint64_t freed;
} stats;
#define NODE_IS_SET(p, f) (((uintptr_t)p & f) == f)
#define NODE_SET_FLAG(p, f) ((void *)((uintptr_t)p | f))
#define NODE_CLR_FLAG(p, f) ((void *)((uintptr_t)p & ~f))
#define NODE_POINTER(p) ((void *)((uintptr_t)p & ~NODE_FLAGS))
list_node_t * list_node_new(list_data_t data)
{
list_node_t * new = malloc(sizeof(*new));
new->data = data;
stats.mallocd++;
return new;
}
void list_node_free(list_node_t * node)
{
free(node);
stats.freed++;
}
static void list_add(list_t * list, list_data_t data)
{
atomic_fetch_add_explicit(&list->entries, 1, memory_order_seq_cst);
list_node_t * new = list_node_new(data);
list_node_t * _Atomic * next = &list->head;
list_node_t * current = atomic_load_explicit(next, memory_order_seq_cst);
do
{
stats.add_count++;
while ((NODE_POINTER(current) != NULL) &&
NODE_IS_SET(current, NODE_REMOVED))
{
stats.add_count++;
current = NODE_POINTER(current);
next = &current->next;
current = atomic_load_explicit(next, memory_order_seq_cst);
}
atomic_store_explicit(&new->next, current, memory_order_seq_cst);
}
while(!atomic_compare_exchange_weak_explicit(
next, &current, new,
memory_order_seq_cst, memory_order_seq_cst));
atomic_fetch_add_explicit(&list->exits, 1, memory_order_seq_cst);
atomic_fetch_add_explicit(&list->size, 1, memory_order_seq_cst);
stats.added++;
}
static bool list_remove(list_t * list, list_data_t * pData)
{
uint64_t entries = atomic_fetch_add_explicit(
&list->entries, 1, memory_order_seq_cst);
list_node_t * start = atomic_fetch_or_explicit(
&list->head, NODE_REMOVED, memory_order_seq_cst);
list_node_t * current = start;
stats.remove_count++;
while ((NODE_POINTER(current) != NULL) &&
NODE_IS_SET(current, NODE_REMOVED))
{
stats.remove_count++;
current = NODE_POINTER(current);
current = atomic_fetch_or_explicit(&current->next,
NODE_REMOVED, memory_order_seq_cst);
}
uint64_t exits = atomic_fetch_add_explicit(
&list->exits, 1, memory_order_seq_cst) + 1;
bool result = false;
current = NODE_POINTER(current);
if (current != NULL)
{
result = true;
*pData = current->data;
current = atomic_load_explicit(
&current->next, memory_order_seq_cst);
atomic_fetch_add_explicit(&list->size,
-1, memory_order_seq_cst);
stats.removed++;
}
start = NODE_SET_FLAG(start, NODE_REMOVED);
if (atomic_compare_exchange_strong_explicit(
&list->head, &start, current,
memory_order_seq_cst, memory_order_seq_cst))
{
entries = atomic_load_explicit(&list->entries, memory_order_seq_cst);
while ((int64_t)(entries - exits) > 0)
{
pthread_yield();
exits = atomic_load_explicit(&list->exits, memory_order_seq_cst);
}
list_node_t * end = NODE_POINTER(current);
list_node_t * current = NODE_POINTER(start);
while (current != end)
{
list_node_t * tmp = current;
current = atomic_load_explicit(&current->next, memory_order_seq_cst);
list_node_free(tmp);
current = NODE_POINTER(current);
}
}
return result;
}
static list_t list;
pthread_mutex_t ioLock = PTHREAD_MUTEX_INITIALIZER;
void * thread_entry(void * arg)
{
sleep(2);
int id = *(int *)arg;
for (int i = 0; i < NUM_OPS; i++)
{
bool insert = random() % 2;
if (insert)
{
list_add(&list, i);
}
else
{
list_data_t data;
list_remove(&list, &data);
}
}
struct rusage u;
getrusage(RUSAGE_THREAD, &u);
pthread_mutex_lock(&ioLock);
printf("Thread %d stats:\n", id);
printf("\tadded = %lu\n", stats.added);
printf("\tremoved = %lu\n", stats.removed);
printf("\ttotal added = %ld\n", (int64_t)(stats.added - stats.removed));
printf("\tadded count = %lu\n", stats.add_count);
printf("\tremoved count = %lu\n", stats.remove_count);
printf("\tadd average = %f\n", (float)stats.add_count / stats.added);
printf("\tremove average = %f\n", (float)stats.remove_count / stats.removed);
printf("\tmallocd = %lu\n", stats.mallocd);
printf("\tfreed = %lu\n", stats.freed);
printf("\ttotal mallocd = %ld\n", (int64_t)(stats.mallocd - stats.freed));
printf("\tutime = %f\n", u.ru_utime.tv_sec
+ u.ru_utime.tv_usec / 1000000.0f);
printf("\tstime = %f\n", u.ru_stime.tv_sec
+ u.ru_stime.tv_usec / 1000000.0f);
pthread_mutex_unlock(&ioLock);
return NULL;
}
int main(int argc, char ** argv)
{
struct {
pthread_t thread;
int id;
}
threads[NUM_THREADS];
for (int i = 0; i < NUM_THREADS; i++)
{
threads[i].id = i;
pthread_create(&threads[i].thread, NULL, thread_entry, &threads[i].id);
}
for (int i = 0; i < NUM_THREADS; i++)
{
pthread_join(threads[i].thread, NULL);
}
printf("Size = %ld\n", atomic_load(&list.size));
uint32_t count = 0;
list_data_t data;
while(list_remove(&list, &data))
{
count++;
}
printf("Removed %u\n", count);
}
</code></pre> | 2018-06-11 17:54:00.420000+00:00 | 2022-07-07 02:46:52.663000+00:00 | 2018-06-11 19:19:22.373000+00:00 | c|stack|lockless|rcu|aba | ['https://arxiv.org/abs/1712.06134', 'https://liburcu.org/', 'https://github.com/concurrencykit/ck'] | 3 |
38,190,720 | <p>Dedupe should work fine for data of that size. </p>
<p>There has been some excellent work by <a href="https://people.cs.umass.edu/~mwick/MikeWeb/Publications_files/wick12hierarchical.pdf" rel="nofollow">Michael Wick</a> and <a href="http://arxiv.org/abs/1312.4645" rel="nofollow">Beka Steorts</a> that have better complexity than dedupe. </p> | 2016-07-04 18:50:19.663000+00:00 | 2016-07-05 13:43:49.883000+00:00 | 2016-07-05 13:43:49.883000+00:00 | null | 38,177,496 | <p>I'm working on detecting duplicates in a list of around 5 million addresses, and was wondering if there was consensus on an efficient algorithm for such a purpose. I've looked at the Dedupe library on Gitbub (<a href="https://github.com/datamade/dedupe" rel="nofollow">https://github.com/datamade/dedupe</a>), but based on the documentation I'm not clear that this would scale to a large application well. </p>
<p>As an aside, I'm just looking to define duplicates based on textual similarity - I have already done a lot of cleaning on the addresses. I've been using a crude method based on Levenshtein distance, but was wondering if there's anything more efficient for large datasets.</p>
<p>Thanks,</p> | 2016-07-04 05:29:47.620000+00:00 | 2016-07-05 13:43:49.883000+00:00 | null | algorithm|text|machine-learning|cluster-analysis | ['https://people.cs.umass.edu/~mwick/MikeWeb/Publications_files/wick12hierarchical.pdf', 'http://arxiv.org/abs/1312.4645'] | 2 |
62,117,589 | <p>Thanks to your help and the information here <a href="https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d" rel="nofollow noreferrer">leosimmons</a>, I found the source of my confusion:</p>
<p>The Bellman equation used here <a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">Bellman equation - link 3</a> follows the equation:</p>
<pre><code>value = reward + discount_factor * target_network.predict(next_state)[argmax(online_network.predict(next_state))]
</code></pre>
<p>The Bellman equation in the <strong>original</strong> (vanilla) DQN <a href="https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf" rel="nofollow noreferrer">Bellman equation - link 2</a> is:</p>
<pre><code>value = reward + discount_factor * max(target_network.predict(next_state))
</code></pre>
<p><a href="https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d" rel="nofollow noreferrer">leosimmons</a> </p>
<blockquote>
<p>The difference is that, using the terminology of the field, the second
equation uses the target network for both SELECTING and EVALUATING the
action to take whereas the first equation uses the online network for
SELECTING the action to take and the target network for EVALUATING the
action. Selection here means choosing which action to take, and
evaluation means getting the projected Q value for that action. This
form of the Bellman equation is what makes this agent a Double DQN and
not just a DQN and was introduced in <a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">3</a>.</p>
</blockquote>
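<p>To make the difference concrete, here is a small batched sketch that reuses the roles of the variables in your code (<code>dqn_next</code> = online network on <code>next_states</code>, <code>tar_next</code> = target network on <code>next_states</code>). It is only an illustration I wrote for this answer, not code from the linked article, and terminal-state masking is omitted:</p>
<pre><code>import numpy as np

def vanilla_dqn_targets(rewards, q_next_target, gamma):
    # Vanilla DQN: the target network both SELECTS (max) and EVALUATES the action.
    return rewards + gamma * np.max(q_next_target, axis=1)

def double_dqn_targets(rewards, q_next_online, q_next_target, gamma):
    # Double DQN: the online network SELECTS the greedy action (argmax) ...
    best_actions = np.argmax(q_next_online, axis=1)
    # ... and the target network EVALUATES that action.
    return rewards + gamma * q_next_target[np.arange(len(rewards)), best_actions]
</code></pre>
<p>With your variable names, Version 2 corresponds to <code>double_dqn_targets(rewards, dqn_next, tar_next, GAMMA)</code> and Version 1 to <code>vanilla_dqn_targets(rewards, tar_next, GAMMA)</code>.</p>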
<p><a href="https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d" rel="nofollow noreferrer">1</a> <a href="https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d" rel="nofollow noreferrer">https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d</a></p>
<p><a href="https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf" rel="nofollow noreferrer">2</a> <a href="https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf" rel="nofollow noreferrer">https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf</a></p>
<p><a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">3</a> <a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.06461.pdf</a></p>
<p>Very well explained here:
<a href="https://youtu.be/ILDLT97FsNM?t=331" rel="nofollow noreferrer">https://youtu.be/ILDLT97FsNM?t=331</a></p> | 2020-05-31 14:20:18.697000+00:00 | 2020-06-03 15:57:21.317000+00:00 | 2020-06-03 15:57:21.317000+00:00 | null | 59,448,001 | <p>I'm not sure how to get the Q Values for a DDQN. </p>
<p>DQN is the normal network, TAR the target network.</p>
<pre><code> q_values = self.DQN.predict(c_states) # DQN batch predict Q on states
dqn_next = self.DQN.predict(n_states) # DQN batch predict Q on next_states
tar_next = self.TAR.predict(n_states) # TAR batch predict Q on next_states
</code></pre>
<p>I mainly found 2 versions:</p>
<p><strong>Version 1:</strong></p>
<pre><code>q_values[i][actions[i]] = (rewards[i] + (GAMMA * np.amax(tar_next[i])))
</code></pre>
<p><strong>Version 2:</strong></p>
<pre><code>act = np.argmax(dqn_next[i])
q_values[i][actions[i]] = (rewards[i] + (GAMMA * tar_next[i][act]))
</code></pre>
<p>Which one is correct? And why?</p>
<p><strong>Version 1 Links:</strong></p>
<p><a href="https://github.com/keon/deep-q-learning/blob/master/ddqn.py" rel="nofollow noreferrer">https://github.com/keon/deep-q-learning/blob/master/ddqn.py</a></p>
<p><a href="https://pythonprogramming.net/training-deep-q-learning-dqn-reinforcement-learning-python-tutorial" rel="nofollow noreferrer">https://pythonprogramming.net/training-deep-q-learning-dqn-reinforcement-learning-python-tutorial</a></p>
<p><strong>Version 2 Links:</strong></p>
<p><a href="https://github.com/germain-hug/Deep-RL-Keras/blob/master/DDQN/ddqn.py" rel="nofollow noreferrer">https://github.com/germain-hug/Deep-RL-Keras/blob/master/DDQN/ddqn.py</a></p>
<p><a href="https://github.com/rlcode/reinforcement-learning/blob/master/2-cartpole/2-double-dqn/cartpole_ddqn.py" rel="nofollow noreferrer">https://github.com/rlcode/reinforcement-learning/blob/master/2-cartpole/2-double-dqn/cartpole_ddqn.py</a></p>
<p><a href="https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/" rel="nofollow noreferrer">https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/</a></p>
<hr>
<p><strong>EDIT:</strong>
Many thanks, to clarify this</p>
<pre><code>Q-learning:
q_values[i][actions[i]] = (rewards[i] + (GAMMA * np.amax(tar_next[i])))
SARSA:
act = np.argmax(dqn_next[i])
q_values[i][actions[i]] = (rewards[i] + (GAMMA * tar_next[i][act]))
</code></pre>
<p><strong>EDIT: re-open 03/2020</strong></p>
<p>I'm sorry but I have to re-open that question. Maybe I misunderstood something, but the following sources show that my Version 2 (SARSA) is Double Q-learning?</p>
<p><strong>Page 158 : Double Q-learning</strong>
<a href="http://incompleteideas.net/book/RLbook2018.pdf" rel="nofollow noreferrer">http://incompleteideas.net/book/RLbook2018.pdf</a></p>
<p><a href="https://adventuresinmachinelearning.com/double-q-reinforcement-learning-in-tensorflow-2/" rel="nofollow noreferrer">adventuresinML</a></p>
<p><a href="https://github.com/adventuresinML/adventures-in-ml-code/blob/e661eeb5db86d2d0aa21621b68b5186d80e3d8b6/double_q_tensorflow2.py#L86" rel="nofollow noreferrer">adventuresinML source</a></p> | 2019-12-22 21:02:04.407000+00:00 | 2020-06-03 15:57:21.317000+00:00 | 2020-03-25 20:49:32.043000+00:00 | python|deep-learning|neural-network|reinforcement-learning | ['https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf', 'https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d', 'https://medium.com/@leosimmons/double-dqn-implementation-to-solve-openai-gyms-cartpole-v-0-df554cd0614d', 'https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf', 'https://storage.googleapis.com/deepmind-media/dqn/DQNNaturePaper.pdf', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://youtu.be/ILDLT97FsNM?t=331'] | 12 |
59,461,739 | <p>This is Q-learning (the version with the max operator) vs SARSA (without the max). </p>
<p>In short, you collect samples using the e-greedy policy: this is your behavior (or exploration) policy. The policy you want to learn is called "target" and can be different.<br>
With Q-learning, you use the max operator, so your target is chosen according to the greedy (target) policy. This is called off-policy learning, because you learn a policy (target) with the samples collected by a different one (behavior).
<br>
With SARSA, there is no max, so in practice you just use the action from the samples, which was selected by the behavior policy. This is on-policy, because the target and the behavior are the same.</p>
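<p>As a minimal illustration of that difference (the function names are mine, not from any library), the textbook one-step targets look like this:</p>
<pre><code>import numpy as np

def q_learning_target(reward, gamma, q_next):
    # Off-policy: evaluate the greedy action in the next state,
    # regardless of which action the behavior policy actually took.
    return reward + gamma * np.max(q_next)

def sarsa_target(reward, gamma, q_next, next_action_taken):
    # On-policy: evaluate the action the behavior policy actually selected.
    return reward + gamma * q_next[next_action_taken]
</code></pre>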
<p>Which one to prefer is up to you, but I think that Q-learning is more common (and DQN uses Q-learning).</p>
<p>More reading about this</p>
<p><a href="https://stackoverflow.com/questions/6848828/what-is-the-difference-between-q-learning-and-sarsa">What is the difference between Q-learning and SARSA?</a></p>
<p><a href="https://stackoverflow.com/questions/32846262/are-q-learning-and-sarsa-with-greedy-selection-equivalent">Are Q-learning and SARSA with greedy selection equivalent?</a></p>
<p><a href="https://stats.stackexchange.com/questions/184657/what-is-the-difference-between-off-policy-and-on-policy-learning">https://stats.stackexchange.com/questions/184657/what-is-the-difference-between-off-policy-and-on-policy-learning</a></p>
<p><a href="http://incompleteideas.net/book/RLbook2018.pdf" rel="nofollow noreferrer">http://incompleteideas.net/book/RLbook2018.pdf</a></p>
<p><strong>EDIT FOR DDQN</strong></p>
<p>SARSA and Q-learning are two separate algorithms.
<br>
In DDQN you have two target Q, and two target policies, so the algorithm is still off-policy (sampling policy is e-greedy, target policies are greedy), while SARSA is on-policy (target policy = sampling policy).
<br>
The trick in DDQN is that you use the max operator over Q2 (second critic) in the TD target for updating Q1 (first critic), and vice versa. <strong>But there still is the max, so it's still off-policy. SARSA, instead, is on-policy.</strong></p>
<p>There are multiple versions of DDQN; some use the minimum over Q1 and Q2, for instance. Here are some references:</p>
<p><a href="https://arxiv.org/pdf/1509.06461.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1509.06461.pdf</a></p>
<p><a href="https://arxiv.org/pdf/1802.09477.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.09477.pdf</a></p> | 2019-12-23 22:06:09.733000+00:00 | 2020-03-27 20:38:01.773000+00:00 | 2020-03-27 20:38:01.773000+00:00 | null | 59,448,001 | <p>I'm not sure how to get the Q Values for a DDQN. </p>
<p>DQN is the normal network, TAR the target network.</p>
<pre><code> q_values = self.DQN.predict(c_states) # DQN batch predict Q on states
dqn_next = self.DQN.predict(n_states) # DQN batch predict Q on next_states
tar_next = self.TAR.predict(n_states) # TAR batch predict Q on next_states
</code></pre>
<p>I mainly found 2 versions:</p>
<p><strong>Version 1:</strong></p>
<pre><code>q_values[i][actions[i]] = (rewards[i] + (GAMMA * np.amax(tar_next[i])))
</code></pre>
<p><strong>Version 2:</strong></p>
<pre><code>act = np.argmax(dqn_next[i])
q_values[i][actions[i]] = (rewards[i] + (GAMMA * tar_next[i][act]))
</code></pre>
<p>Which one is correct? And why?</p>
<p><strong>Version 1 Links:</strong></p>
<p><a href="https://github.com/keon/deep-q-learning/blob/master/ddqn.py" rel="nofollow noreferrer">https://github.com/keon/deep-q-learning/blob/master/ddqn.py</a></p>
<p><a href="https://pythonprogramming.net/training-deep-q-learning-dqn-reinforcement-learning-python-tutorial" rel="nofollow noreferrer">https://pythonprogramming.net/training-deep-q-learning-dqn-reinforcement-learning-python-tutorial</a></p>
<p><strong>Version 2 Links:</strong></p>
<p><a href="https://github.com/germain-hug/Deep-RL-Keras/blob/master/DDQN/ddqn.py" rel="nofollow noreferrer">https://github.com/germain-hug/Deep-RL-Keras/blob/master/DDQN/ddqn.py</a></p>
<p><a href="https://github.com/rlcode/reinforcement-learning/blob/master/2-cartpole/2-double-dqn/cartpole_ddqn.py" rel="nofollow noreferrer">https://github.com/rlcode/reinforcement-learning/blob/master/2-cartpole/2-double-dqn/cartpole_ddqn.py</a></p>
<p><a href="https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/" rel="nofollow noreferrer">https://jaromiru.com/2016/11/07/lets-make-a-dqn-double-learning-and-prioritized-experience-replay/</a></p>
<hr>
<p><strong>EDIT:</strong>
Many thanks, to clarify this</p>
<pre><code>Q-learning:
q_values[i][actions[i]] = (rewards[i] + (GAMMA * np.amax(tar_next[i])))
SARSA:
act = np.argmax(dqn_next[i])
q_values[i][actions[i]] = (rewards[i] + (GAMMA * tar_next[i][act]))
</code></pre>
<p><strong>EDIT: re-open 03/2020</strong></p>
<p>I'm sorry but i have to re-open that question. Maybe I misunderstood something, but the following sources show that my Version 2 (SARSA) is Double Q Learning?</p>
<p><strong>Page 158 : Double Q-learning</strong>
<a href="http://incompleteideas.net/book/RLbook2018.pdf" rel="nofollow noreferrer">http://incompleteideas.net/book/RLbook2018.pdf</a></p>
<p><a href="https://adventuresinmachinelearning.com/double-q-reinforcement-learning-in-tensorflow-2/" rel="nofollow noreferrer">adventuresinML</a></p>
<p><a href="https://github.com/adventuresinML/adventures-in-ml-code/blob/e661eeb5db86d2d0aa21621b68b5186d80e3d8b6/double_q_tensorflow2.py#L86" rel="nofollow noreferrer">adventuresinML source</a></p> | 2019-12-22 21:02:04.407000+00:00 | 2020-06-03 15:57:21.317000+00:00 | 2020-03-25 20:49:32.043000+00:00 | python|deep-learning|neural-network|reinforcement-learning | ['https://stackoverflow.com/questions/6848828/what-is-the-difference-between-q-learning-and-sarsa', 'https://stackoverflow.com/questions/32846262/are-q-learning-and-sarsa-with-greedy-selection-equivalent', 'https://stats.stackexchange.com/questions/184657/what-is-the-difference-between-off-policy-and-on-policy-learning', 'http://incompleteideas.net/book/RLbook2018.pdf', 'https://arxiv.org/pdf/1509.06461.pdf', 'https://arxiv.org/pdf/1802.09477.pdf'] | 6 |
43,220,021 | <p>I do not have a direct answer to your question, be it pseudocode or an actual implementation of an algorithm in a specific language, but what I can do here is give you a list of references that I think are related to the topic and may help guide you toward a working algorithm:</p>
<ul>
<li><a href="https://books.google.com/books?id=wfC7LPTcRmYC&pg=PA44&lpg=PA44&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=vldXFsHVjt&sig=bOUyG3J7jDeyANRTJzNHeMdhuhs&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIMDAG#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false" rel="nofollow noreferrer">High Performance Heterogeneous Computing</a></li>
<li><a href="https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/bioinformatics/24/23/10.1093/bioinformatics/btn519/2/btn519.pdf?Expires=1491697174&Signature=MIoTpQq3MtRK0jKLaJCI7McBfU3KX3fHWPUAxQGj39idTKoPKbFshytL8m5rwblmHZe6E8LHSq5P04rAGDUQm7tNiQrEvbuQQ3dBWM8XAjTSCygZ9CVcd1Mj-hS8vJFk5~dGdDVVswS1cSJrdEyrMexmz6PaB3t1QLB7aQWHyUUlWSqCKmSOAFN0a7js3Qagdbzk4MPuLbu2Hcp~U0fybqqxVMftSCrBbTCDi0puqEJc1h1M8BeR3KoFCqmNteEt7Ln1NIzl0~MWbfxLLRDuEs38lFxb0kk~KOg6jhjzLHhuzttbIsQx9yRy726EgoMaoG2Iu5OjXko73dBRE3p16A__&Key-Pair-Id=APKAIUCZBIA4LVPAVW3Q" rel="nofollow noreferrer">Genetics and Population Analysis: A better block partition and ligation strategy for individual
haplotyping</a></li>
<li><a href="http://ieeexplore.ieee.org/abstract/document/5397582/?reload=true" rel="nofollow noreferrer">Multiple block-size search algorithm for fast block motion estimation</a></li>
<li><a href="https://books.google.com/books?id=WCOTC2UmXT8C&pg=PA180&lpg=PA180&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=88pYITiuor&sig=dtjC2DdjlipdvxqDKPjFxRW5TMw&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIKjAD#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false" rel="nofollow noreferrer">Parallel Computing: Architectures, Algorithms, and Applications</a></li>
<li><a href="https://pdfs.semanticscholar.org/5eba/a29dcea0c44cedc4e4f4db74983c7805b6f7.pdf" rel="nofollow noreferrer">An Algorithm for Optimal Partitioning of
Data on an Interval</a></li>
<li><a href="http://www.cs.upc.edu/~lfrias/research/parpar/parpar.pdf" rel="nofollow noreferrer">Parallel Partition Revisited</a></li>
<li><a href="https://books.google.com/books?id=Z6ltCQAAQBAJ&pg=PA175&lpg=PA175&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=ldghK7FzF_&sig=oLGE5jaIRGSsEIJ6CT2s1LQdKLA&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIUDAL#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false" rel="nofollow noreferrer">Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques</a></li>
<li><a href="http://people.csail.mit.edu/ruhl/papers/1998-icip.pdf" rel="nofollow noreferrer">OPTIMAL HIERARCHICAL PARTITIONS FOR FRACTAL IMAGE COMPRESSION</a></li>
<li><a href="https://arxiv.org/pdf/cs/0211010.pdf" rel="nofollow noreferrer">Efficient Tree Layout in a Multilevel Memory Hierarchy</a></li>
<li><a href="http://bmcevolbiol.biomedcentral.com/articles/10.1186/1471-2148-14-82" rel="nofollow noreferrer">Selecting optimal partitioning schemes for phylogenomic datasets</a></li>
</ul>
<p>Although many of these may not be exactly the Knapsack algorithmic problem, I think these topics are related in ways that can help with your overall goal. The Knapsack problem is, first and foremost, a variant of a partitioning problem, and partitioning has many implementations and schemes. The use of parallel and multithreaded programming may also help on large datasets. I found this handful of books and whitepapers to be a great read.</p>
<p>Basically you have a Knapsack <code>K</code> with a volume <code>KV</code> that needs to be subdivided into smaller volumes <code>{KV1, KV2, ... KVn}</code> holding items of different types; each item has a <code>value</code>, a <code>weight</code> and a <code>category or classification</code>, and the item's <code>weight</code> represents the portion of the volume that it consumes. You also have <code>[min, max]</code> bounds as constraints, with the requirement that you take at least one item of each <code>category</code> or <code>classification</code>. With these parameters as your base scenario, you then want to fill <code>KV</code> with as many <code>elements</code> as possible, and do it as efficiently as possible: hopefully with <code>linear to polynomial</code> time and space complexity, avoiding <code>quadratic and/or exponential</code> time and space complexities.</p>
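<p>Purely to illustrate how exact per-category counts can be folded into the knapsack state, here is a naive sketch I put together for this answer (in Python, since you said pseudocode is fine; it is not taken from the references above and it is not optimized):</p>
<pre><code>def knapsack_exact_counts(items, max_weight, required):
    """Naive illustrative DP: items are (value, weight, category) tuples and
    required maps each category to the exact number of items to take."""
    cats = sorted(required)
    pos = {c: i for i, c in enumerate(cats)}
    target = tuple(required[c] for c in cats)
    NEG = float("-inf")
    # state: (weight used, per-category counts) mapped to the best value so far
    dp = {(0, tuple([0] * len(cats))): 0}
    for value, weight, cat in items:
        if cat not in pos:  # ignore item types that are never required
            continue
        updates = {}
        for (w, counts), best in dp.items():
            nw = w + weight
            nc = list(counts)
            nc[pos[cat]] += 1
            if nw &lt;= max_weight and nc[pos[cat]] &lt;= required[cat]:
                key = (nw, tuple(nc))
                cand = best + value
                if cand &gt; dp.get(key, NEG) and cand &gt; updates.get(key, NEG):
                    updates[key] = cand
        dp.update(updates)  # merge after each item so every item is used at most once
    feasible = [v for (w, counts), v in dp.items() if counts == target]
    return max(feasible) if feasible else None
</code></pre>
<p>If weights are integers bounded by the capacity, the number of states stays manageable; with arbitrary weights this naive version can blow up quickly, which is where the partitioning and parallelization literature above may help.</p>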
<p>Looking at other algorithms that are uniquely different such as partitioning algorithms, population densities and growth, image compressions, etc. can give you insight into your specific problem as the overall foundations and caveats of these algorithms are similar in nature. </p> | 2017-04-05 00:44:12.280000+00:00 | 2017-04-05 00:44:12.280000+00:00 | null | null | 43,146,315 | <p>I have a problem which is similar to the Knapsack problem, more specifically the <a href="https://en.wikipedia.org/wiki/Knapsack_problem#Variations" rel="nofollow noreferrer">multidimensional variation</a>.</p>
<p>I have a bunch of objects which all have a cost, a value, and a category. I need to do the Knapsack optimisation for value under a maximum cost, but also have a specific number of objects in each category.</p>
<p>I have successfully implemented in C++ the original knapsack algorithm, without paying attention to the categories.</p>
<p>When I tried to add the categories, I figured out that I could simply treat this as a multidimensional knapsack problem, with each category being a weight of either 0 or 1 in a new dimension.</p>
<p>My main problem is that I do not only have a maximum, ex: 5 objects of type food, but also a minimum, since I need <strong>exactly</strong> 5 objects of type food.</p>
<p>And I can't figure out how to add a minimum into the algorithm.</p>
<p>Obviously, I can use a general case, where every dimension has a maximum and minimum, and optimise for total, since all my dimensions but one only have a range of 1, so this would end up optimising for value anyway. Furthermore, I can set the minimum for value to zero, to avoid having one dimension without a minimum, and it would still work.</p>
<p>I'm working in C++, but honestly even pseudo-code would be fine, I just need the algorithm.</p>
<p>Obviously I also need it to be fast, if possible as fast as the <a href="https://en.wikipedia.org/wiki/Knapsack_problem#Variations" rel="nofollow noreferrer">multidimensional variation</a>.</p>
<p>Here is an example of the test case. As this is mostly an optimization problem, the instance is huge, but it should work on any instance size. The number of possible categories and number of category fields is fixed.</p>
<p>You have a backpack that can hold a maximum of 100 units of weight, and a list of 1000 objects, each object having a value, a weight and a type. You specifically need to bring exactly 10 objects of type food, 15 objects of type clothing and 5 Tools. Every object has a completely arbitrary (but greater than 0) value in dollars, and weight in units. I would need to find the optimal configuration for value respecting the maximum weight and the specific number of each type of items.</p>
<p>The list of objects will always contain at least one valid configuration, which means that it will always have at least enough objects of every type that will end up Under the maximum weight, so I don't have to plan for the "no answer" case. I just have to find the best answer for a (probably) huge number of available items.</p> | 2017-03-31 17:16:49.740000+00:00 | 2019-07-09 06:28:24.603000+00:00 | 2019-07-09 06:28:24.603000+00:00 | c++|arrays|algorithm|multidimensional-array|knapsack-problem | ['https://books.google.com/books?id=wfC7LPTcRmYC&pg=PA44&lpg=PA44&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=vldXFsHVjt&sig=bOUyG3J7jDeyANRTJzNHeMdhuhs&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIMDAG#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false', 'https://oup.silverchair-cdn.com/oup/backfile/Content_public/Journal/bioinformatics/24/23/10.1093/bioinformatics/btn519/2/btn519.pdf?Expires=1491697174&Signature=MIoTpQq3MtRK0jKLaJCI7McBfU3KX3fHWPUAxQGj39idTKoPKbFshytL8m5rwblmHZe6E8LHSq5P04rAGDUQm7tNiQrEvbuQQ3dBWM8XAjTSCygZ9CVcd1Mj-hS8vJFk5~dGdDVVswS1cSJrdEyrMexmz6PaB3t1QLB7aQWHyUUlWSqCKmSOAFN0a7js3Qagdbzk4MPuLbu2Hcp~U0fybqqxVMftSCrBbTCDi0puqEJc1h1M8BeR3KoFCqmNteEt7Ln1NIzl0~MWbfxLLRDuEs38lFxb0kk~KOg6jhjzLHhuzttbIsQx9yRy726EgoMaoG2Iu5OjXko73dBRE3p16A__&Key-Pair-Id=APKAIUCZBIA4LVPAVW3Q', 'http://ieeexplore.ieee.org/abstract/document/5397582/?reload=true', 'https://books.google.com/books?id=WCOTC2UmXT8C&pg=PA180&lpg=PA180&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=88pYITiuor&sig=dtjC2DdjlipdvxqDKPjFxRW5TMw&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIKjAD#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false', 'https://pdfs.semanticscholar.org/5eba/a29dcea0c44cedc4e4f4db74983c7805b6f7.pdf', 'http://www.cs.upc.edu/~lfrias/research/parpar/parpar.pdf', 'https://books.google.com/books?id=Z6ltCQAAQBAJ&pg=PA175&lpg=PA175&dq=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&source=bl&ots=ldghK7FzF_&sig=oLGE5jaIRGSsEIJ6CT2s1LQdKLA&hl=en&sa=X&ved=0ahUKEwju0sPT_IvTAhWI3YMKHUiFA6IQ6AEIUDAL#v=onepage&q=Optimal%20partitioning%20algorithms%20with%20different%20block%20sizes&f=false', 'http://people.csail.mit.edu/ruhl/papers/1998-icip.pdf', 'https://arxiv.org/pdf/cs/0211010.pdf', 'http://bmcevolbiol.biomedcentral.com/articles/10.1186/1471-2148-14-82'] | 10 |
53,935,120 | <p>@Michael, I am happy that you stepped in, as you definitely know more than me on this :). I am on a learning journey at this point. At your request, here is one of the papers that inspired my understanding:</p>
<blockquote>
<p>arxiv.org/abs/1801.02911 (SPARQL querying of Property Graphs using
Gremlin Traversals)</p>
</blockquote>
<p>I quote them </p>
<blockquote>
<p>"We present a comprehensive empirical evaluation of Gremlinator and
demonstrate its validity and applicability by executing SPARQL queries
on top of the leading graph stores Neo4J, Sparksee and Apache
TinkerGraph and compare the performance with the RDF stores Virtuoso,
4Store and JenaTDB. Our evaluation demonstrates the substantial
performance gain obtained by the Gremlin counterparts of the SPARQL
queries, especially for star-shaped and complex queries."</p>
</blockquote>
<p>They explain, however, that things depend somewhat on the type of queries.</p>
<p>Another answer on Stack Overflow, <a href="https://stackoverflow.com/questions/13046442/comparison-of-relational-databases-and-graph-databases">Comparison of Relational Databases and Graph Databases</a>, also helps in understanding the difference between sets and paths. My understanding is that triple stores work with sets too. This being said, I am definitely not aware of all the optimization techniques implemented in triple stores lately, and I have seen several papers explaining techniques to significantly prune set join operations. </p>
<p>On distribution it is more of a gut feeling. For instance, doing join operations in a distributed fashion sounds very, very expensive to me. I don't have the papers at hand and my research is not exhaustive on the matter, but from what I have read (I will have to dig into my Evernote :) to back it up), that's the fundamental problem with distribution. Automated smart sharding does not seem to alleviate the issue.</p>
<p>@Michael, this is a very, very complex subject. I'm definitely on a learning journey, and that's why I am using Stack Overflow to guide my research. You probably have an idea as to why. So feel free to provide pointers. </p>
<p>This being said, I am not saying that there is a problem with RDF and that property graphs are better. I am saying that, when it comes to graph traversal, there are ways of implementing a backend that make this fast. The data model is not the issue here; the data structure used to support the traversal is the issue. The second thing I am saying is that the choice of query language seems to influence how the "traversal" is performed, and hence the data structure used to back the data model. </p>
<p>That's my understanding so far, and yes, I do understand that there are a lot of other factors at play, so feel free to enumerate some of them to guide my journey. </p>
<p>In short my question comes down to, is it possible to have RDF stores backed by a so-called Native Graph Storage and then Implement Sparql in term of Traversal steps rather than joins over set as per its algebra ? Wouldn't that makes things a bit faster. It seems to be that this is somewhat the approach taken by <a href="https://github.com/graknlabs/grakn" rel="nofollow noreferrer">https://github.com/graknlabs/grakn</a> which is primarily backed by janusGraph for a graph like storage. Although it is not RDF, Graql is the same Idea as having RDFS++ + Sparql. They claim to just do it better, for which i have my reservation, but that's not the fundamental question of this thread. The bottom line is they back knowledge representation by the information retrieval (path traversal) and the accompanying storage approach that Property-Graph championed. Let me be clear on this, I am not saying that the graph native storage is the property of property graph. It is just in my mind a storage approach optimized to store Graph Structure where the information retrieval involve (path) traversal: <a href="https://docs.janusgraph.org/latest/data-model.html" rel="nofollow noreferrer">https://docs.janusgraph.org/latest/data-model.html</a>.</p> | 2018-12-26 17:24:10.803000+00:00 | 2018-12-26 19:47:58.503000+00:00 | 2018-12-26 19:47:58.503000+00:00 | null | 53,919,402 | <p>Sparql based store or put another way, TripleStore, are known to be less efficient than property graph store, on top of not being able to be distributed while maintaining performance as property graph. </p>
<p>I understand that there are a lot of things at stake here, such as inferencing and whatnot. Putting distribution and inferencing aside, where we could limit ourselves to RDFS (which can be fully captured via SPARQL), I am wondering why that is. </p>
<p>More specifically, why is storage the issue? What prevents a SPARQL-based store from storing data the way a property graph store does, and performing traversals instead of massive join queries? Can't SPARQL simply be translated to Gremlin steps, for instance? What is the limitation there? Can't the joins be avoided?</p>
<p>My assumption is that if SPARQL can be translated into efficient traversal steps, and data is stored the way property graph stores store it, as JanusGraph does <a href="https://docs.janusgraph.org/latest/data-model.html" rel="nofollow noreferrer">https://docs.janusgraph.org/latest/data-model.html</a>, then the performance issue would be bridged while maintaining some inference such as RDFS. </p>
<p>This being said, SPARQL is not Turing-complete of course, but at least for what it does, it would do it fast and possibly at scale as well. The goal is not to compete, in my view, but to benefit from SPARQL's ease of use, and to use a traversal language like Gremlin for the things that really require it, e.g. OLAP. </p>
<p>Is there any project in that direction? Has Apache Jena considered any of this? </p>
<p>I saw that Graql of Grakn seem to be using that road for the reason I explain above, hence what's stopping the TripleStore community ?</p> | 2018-12-25 04:45:42.077000+00:00 | 2018-12-26 19:47:58.503000+00:00 | 2018-12-25 13:54:37.907000+00:00 | sparql|gremlin|triplestore|vaticle-typeql|property-graph | ['https://stackoverflow.com/questions/13046442/comparison-of-relational-databases-and-graph-databases', 'https://github.com/graknlabs/grakn', 'https://docs.janusgraph.org/latest/data-model.html'] | 3 |
69,008,772 | <p>GAN training is inherently unstable because of the simultaneous dynamic training of two competing models. I tried plotting the loss values from your question, and the discriminator and generator losses look like below:</p>
<p><a href="https://i.stack.imgur.com/9KZjG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9KZjG.png" alt="enter image description here" /></a></p>
<p>Looking at the loss and the generated images, we can say that the training fails to converge. This failure is due to not finding an equilibrium between the discriminator and the generator. We see that the loss for the discriminator is close to zero, and the loss of the generator rises and is unstable, resulting in garbage images that the discriminator can easily identify as fake.</p>
<p>The discriminator classifies both the real data and the fake data from the generator. The discriminator loss penalizes it for misclassifying a real instance as fake, or a fake instance (created by the generator) as real.</p>
<p>The generator loss is based on the discriminator’s classification – it gets rewarded if it successfully fools the discriminator, and gets penalized otherwise. Viewing the GAN as a zero-sum non-cooperative game, the win is either the discriminator's or the generator's. If one wins, the other loses. Convergence happens at a Nash equilibrium, which is when one player's action doesn't affect the other. Read more about it here: <a href="https://jonathan-hui.medium.com/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b" rel="nofollow noreferrer">https://jonathan-hui.medium.com/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b</a>; and <a href="https://jonathan-hui.medium.com/gan-what-is-wrong-with-the-gan-cost-function-6f594162ce01" rel="nofollow noreferrer">https://jonathan-hui.medium.com/gan-what-is-wrong-with-the-gan-cost-function-6f594162ce01</a> provides deeper insight into the GAN challenges.</p>
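<p>In terms of the BCE setup already used in your code (this only restates what your training loop does; it is not a proposed change), those two losses are:</p>
<pre><code>import torch
import torch.nn as nn

bce = nn.BCELoss()

def discriminator_loss(d_real, d_fake):
    # Penalized for calling real samples fake and generated samples real.
    return bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))

def generator_loss(d_fake):
    # Low loss when the discriminator scores generated samples as real.
    return bce(d_fake, torch.ones_like(d_fake))
</code></pre>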
<p>The convergence failure could also happen due to mode collapse and diminishing gradients. In addition to the exploding-gradients solution suggested by Nihal:</p>
<ol>
<li><p>Try implementing early stopping in the model based on metrics such as the <strong>Inception Score, Modified Inception Score, Frechet Inception Distance, and Wasserstein distance</strong> (taken from this paper: <a href="https://arxiv.org/pdf/1802.03446.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.03446.pdf</a>). These measures help identify model convergence, so training can be stopped automatically once the model converges.</p>
</li>
<li><p>It is also shown that Spectral Normalization, a particular kind of normalization applied to the convolutional kernels, can greatly help the stability of training (a minimal sketch of this, together with the input-noise idea from the next point, follows after this list). <a href="https://arxiv.org/pdf/1802.05957.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1802.05957.pdf</a></p>
</li>
<li><p>Making the training of the discriminator more difficult could help. Adding noise to both the real images and the images from the generator increases the difficulty of the discriminator's task.</p>
</li>
</ol>
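<p>A minimal PyTorch sketch of points 2 and 3 above (purely illustrative: the layer shapes are placeholders, and the noise level <code>sigma</code> would typically be decayed over training):</p>
<pre><code>import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class SNDiscBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Spectral normalization constrains the conv layer's Lipschitz constant.
        self.conv = spectral_norm(nn.Conv2d(in_ch, out_ch, 4, 2, 1, bias=False))
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        return self.act(self.conv(x))

def add_instance_noise(images, sigma=0.1):
    # Apply the same Gaussian noise to both real and generated batches
    # right before they are fed to the discriminator.
    return images + sigma * torch.randn_like(images)
</code></pre>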
<p>Increasing the iterations doesn't always improve the model. More training iterations, beyond some point of training stability may or may not result in higher quality images due to high-variance loss.And since GANs are relatively new, the research direction on challenges faced are still open and debatable.</p> | 2021-09-01 06:28:34.067000+00:00 | 2021-09-01 14:17:53.027000+00:00 | 2021-09-01 14:17:53.027000+00:00 | null | 68,904,476 | <p>I'm trying to create GAN model.
This is my discriminator.py</p>
<pre><code>import torch.nn as nn
class D(nn.Module):
feature_maps = 64
kernel_size = 4
stride = 2
padding = 1
bias = False
inplace = True
def __init__(self):
super(D, self).__init__()
self.main = nn.Sequential(
nn.Conv2d(4, self.feature_maps, self.kernel_size, self.stride, self.padding, bias=self.bias),
nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps, self.feature_maps * 2, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * 2), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * 2, self.feature_maps * (2 * 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2), self.feature_maps * (2 * 2 * 2), self.kernel_size, self.stride,
self.padding, bias=self.bias),
nn.BatchNorm2d(self.feature_maps * (2 * 2 * 2)), nn.LeakyReLU(0.2, inplace=self.inplace),
nn.Conv2d(self.feature_maps * (2 * 2 * 2), 1, self.kernel_size, 1, 0, bias=self.bias),
nn.Sigmoid()
)
def forward(self, input):
output = self.main(input)
return output.view(-1)
</code></pre>
<p>this is my generator.py</p>
<pre><code>import torch.nn as nn
class G(nn.Module):
feature_maps = 512
kernel_size = 4
stride = 2
padding = 1
bias = False
def __init__(self, input_vector):
super(G, self).__init__()
self.main = nn.Sequential(
nn.ConvTranspose2d(input_vector, self.feature_maps, self.kernel_size, 1, 0, bias=self.bias),
nn.BatchNorm2d(self.feature_maps), nn.ReLU(True),
nn.ConvTranspose2d(self.feature_maps, int(self.feature_maps // 2), self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int(self.feature_maps // 2)), nn.ReLU(True),
nn.ConvTranspose2d(int(self.feature_maps // 2), int((self.feature_maps // 2) // 2), self.kernel_size, self.stride,
self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2)), nn.ReLU(True),
nn.ConvTranspose2d((int((self.feature_maps // 2) // 2)), int(((self.feature_maps // 2) // 2) // 2), self.kernel_size,
self.stride, self.padding,
bias=self.bias),
nn.BatchNorm2d(int((self.feature_maps // 2) // 2) // 2), nn.ReLU(True),
nn.ConvTranspose2d(int(((self.feature_maps // 2) // 2) // 2), 4, self.kernel_size, self.stride, self.padding,
bias=self.bias),
nn.Tanh()
)
def forward(self, input):
output = self.main(input)
return output
</code></pre>
<p>This is my gans.py</p>
<pre><code># Importing the libraries
from __future__ import print_function
import torch.nn as nn
import torch.optim as optim
import torch.utils.data
import torchvision.datasets as dset
import torchvision.transforms as transforms
import torchvision.utils as vutils
from torch.autograd import Variable
from generator import G
from discriminator import D
import os
from PIL import Image
batchSize = 64 # We set the size of the batch.
imageSize = 64 # We set the size of the generated images (64x64).
input_vector = 100
nb_epochs = 500
# Creating the transformations
transform = transforms.Compose([transforms.Resize((imageSize, imageSize)), transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5, 0.5), (0.5, 0.5, 0.5,
0.5)), ]) # We create a list of transformations (scaling, tensor conversion, normalization) to apply to the input images.
def pil_loader_rgba(path: str) -> Image.Image:
with open(path, 'rb') as f:
img = Image.open(f)
return img.convert('RGBA')
# Loading the dataset
dataset = dset.ImageFolder(root='./data', transform=transform, loader=pil_loader_rgba)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=batchSize, shuffle=True,
num_workers=2) # We use dataLoader to get the images of the training set batch by batch.
# Defining the weights_init function that takes as input a neural network m and that will initialize all its weights.
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def is_cuda_available():
return torch.cuda.is_available()
def is_gpu_available():
if is_cuda_available():
if int(torch.cuda.device_count()) > 0:
return True
return False
return False
# Create results directory
def create_dir(name):
if not os.path.exists(name):
os.makedirs(name)
# Creating the generator
netG = G(input_vector)
netG.apply(weights_init)
# Creating the discriminator
netD = D()
netD.apply(weights_init)
if is_gpu_available():
netG.cuda()
netD.cuda()
# Training the DCGANs
criterion = nn.BCELoss()
optimizerD = optim.Adam(netD.parameters(), lr=0.0002, betas=(0.5, 0.999))
optimizerG = optim.Adam(netG.parameters(), lr=0.0002, betas=(0.5, 0.999))
generator_model = 'generator_model'
discriminator_model = 'discriminator_model'
def save_model(epoch, model, optimizer, error, filepath, noise=None):
if os.path.exists(filepath):
os.remove(filepath)
torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': error,
'noise': noise
}, filepath)
def load_checkpoint(filepath):
if os.path.exists(filepath):
return torch.load(filepath)
return None
def main():
print("Device name : " + torch.cuda.get_device_name(0))
for epoch in range(nb_epochs):
for i, data in enumerate(dataloader, 0):
checkpointG = load_checkpoint(generator_model)
checkpointD = load_checkpoint(discriminator_model)
if checkpointG:
netG.load_state_dict(checkpointG['model_state_dict'])
optimizerG.load_state_dict(checkpointG['optimizer_state_dict'])
if checkpointD:
netD.load_state_dict(checkpointD['model_state_dict'])
optimizerD.load_state_dict(checkpointD['optimizer_state_dict'])
# 1st Step: Updating the weights of the neural network of the discriminator
netD.zero_grad()
# Training the discriminator with a real image of the dataset
real, _ = data
if is_gpu_available():
input = Variable(real.cuda()).cuda()
target = Variable(torch.ones(input.size()[0]).cuda()).cuda()
else:
input = Variable(real)
target = Variable(torch.ones(input.size()[0]))
output = netD(input)
errD_real = criterion(output, target)
# Training the discriminator with a fake image generated by the generator
if is_gpu_available():
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1)).cuda()
target = Variable(torch.zeros(input.size()[0])).cuda()
else:
noise = Variable(torch.randn(input.size()[0], input_vector, 1, 1))
target = Variable(torch.zeros(input.size()[0]))
fake = netG(noise)
output = netD(fake.detach())
errD_fake = criterion(output, target)
# Backpropagating the total error
errD = errD_real + errD_fake
errD.backward()
optimizerD.step()
# 2nd Step: Updating the weights of the neural network of the generator
netG.zero_grad()
if is_gpu_available():
target = Variable(torch.ones(input.size()[0])).cuda()
else:
target = Variable(torch.ones(input.size()[0]))
output = netD(fake)
errG = criterion(output, target)
errG.backward()
optimizerG.step()
# 3rd Step: Printing the losses and saving the real images and the generated images of the minibatch every 100 steps
print('[%d/%d][%d/%d] Loss_D: %.4f Loss_G: %.4f' % (
epoch, nb_epochs, i, len(dataloader), errD.data, errG.data))
save_model(epoch, netG, optimizerG, errG, generator_model, noise)
save_model(epoch, netD, optimizerD, errD, discriminator_model, noise)
if i % 100 == 0:
create_dir('results')
vutils.save_image(real, '%s/real_samples.png' % "./results", normalize=True)
fake = netG(noise)
vutils.save_image(fake.data, '%s/fake_samples_epoch_%03d.png' % ("./results", epoch), normalize=True)
if __name__ == "__main__":
main()
</code></pre>
<p>So AFTER a few hours I decided to look at my results folder. I saw a weird thing AFTER the 39th epoch.
The generator started generating worse images. Until the 39th epoch the generator IMPROVED.
Please look at the screenshot below.
<a href="https://i.stack.imgur.com/V16hS.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/V16hS.jpg" alt="enter image description here" /></a></p>
<p>Why did the generator suddenly become worse?
I'm trying to run 500 epochs. I thought more epochs meant more success.</p>
<p>So I had a look at the logs and I'm seeing the following:</p>
<pre><code>[40/500][0/157] Loss_D: 0.0141 Loss_G: 5.7559
[40/500][1/157] Loss_D: 0.0438 Loss_G: 5.5805
[40/500][2/157] Loss_D: 0.0161 Loss_G: 6.4947
[40/500][3/157] Loss_D: 0.0138 Loss_G: 7.1711
[40/500][4/157] Loss_D: 0.0547 Loss_G: 4.6262
[40/500][5/157] Loss_D: 0.0295 Loss_G: 4.7831
[40/500][6/157] Loss_D: 0.0103 Loss_G: 6.3700
[40/500][7/157] Loss_D: 0.0276 Loss_G: 5.9162
[40/500][8/157] Loss_D: 0.0205 Loss_G: 6.3571
[40/500][9/157] Loss_D: 0.0139 Loss_G: 6.4961
[40/500][10/157] Loss_D: 0.0117 Loss_G: 6.4371
[40/500][11/157] Loss_D: 0.0057 Loss_G: 6.6858
[40/500][12/157] Loss_D: 0.0203 Loss_G: 5.4308
[40/500][13/157] Loss_D: 0.0078 Loss_G: 6.5749
[40/500][14/157] Loss_D: 0.0115 Loss_G: 6.3202
[40/500][15/157] Loss_D: 0.0187 Loss_G: 6.2258
[40/500][16/157] Loss_D: 0.0052 Loss_G: 6.5253
[40/500][17/157] Loss_D: 0.0158 Loss_G: 5.5672
[40/500][18/157] Loss_D: 0.0156 Loss_G: 5.5416
[40/500][19/157] Loss_D: 0.0306 Loss_G: 5.4550
[40/500][20/157] Loss_D: 0.0077 Loss_G: 6.1985
[40/500][21/157] Loss_D: 0.0158 Loss_G: 5.3092
[40/500][22/157] Loss_D: 0.0167 Loss_G: 5.8395
[40/500][23/157] Loss_D: 0.0119 Loss_G: 6.0849
[40/500][24/157] Loss_D: 0.0104 Loss_G: 6.5493
[40/500][25/157] Loss_D: 0.0182 Loss_G: 5.6758
[40/500][26/157] Loss_D: 0.0145 Loss_G: 5.8336
[40/500][27/157] Loss_D: 0.0050 Loss_G: 6.8472
[40/500][28/157] Loss_D: 0.0080 Loss_G: 6.4894
[40/500][29/157] Loss_D: 0.0186 Loss_G: 5.5563
[40/500][30/157] Loss_D: 0.0143 Loss_G: 6.4144
[40/500][31/157] Loss_D: 0.0377 Loss_G: 5.4557
[40/500][32/157] Loss_D: 0.0540 Loss_G: 4.6034
[40/500][33/157] Loss_D: 0.0200 Loss_G: 5.6417
[40/500][34/157] Loss_D: 0.0189 Loss_G: 5.7760
[40/500][35/157] Loss_D: 0.0197 Loss_G: 6.1732
[40/500][36/157] Loss_D: 0.0093 Loss_G: 6.4046
[40/500][37/157] Loss_D: 0.0281 Loss_G: 5.5217
[40/500][38/157] Loss_D: 0.0410 Loss_G: 5.9157
[40/500][39/157] Loss_D: 0.0667 Loss_G: 5.2522
[40/500][40/157] Loss_D: 0.0530 Loss_G: 5.6412
[40/500][41/157] Loss_D: 0.0315 Loss_G: 5.9325
[40/500][42/157] Loss_D: 0.0097 Loss_G: 6.7819
[40/500][43/157] Loss_D: 0.0157 Loss_G: 5.8630
[40/500][44/157] Loss_D: 0.0382 Loss_G: 5.1942
[40/500][45/157] Loss_D: 0.0331 Loss_G: 5.1490
[40/500][46/157] Loss_D: 0.0362 Loss_G: 5.7026
[40/500][47/157] Loss_D: 0.0237 Loss_G: 5.7493
[40/500][48/157] Loss_D: 0.0227 Loss_G: 5.7636
[40/500][49/157] Loss_D: 0.0230 Loss_G: 5.6500
[40/500][50/157] Loss_D: 0.0329 Loss_G: 5.4542
[40/500][51/157] Loss_D: 0.0306 Loss_G: 5.6473
[40/500][52/157] Loss_D: 0.0254 Loss_G: 5.8464
[40/500][53/157] Loss_D: 0.0402 Loss_G: 5.8609
[40/500][54/157] Loss_D: 0.0242 Loss_G: 5.9952
[40/500][55/157] Loss_D: 0.0400 Loss_G: 5.8378
[40/500][56/157] Loss_D: 0.0302 Loss_G: 5.8990
[40/500][57/157] Loss_D: 0.0239 Loss_G: 5.8134
[40/500][58/157] Loss_D: 0.0348 Loss_G: 5.8109
[40/500][59/157] Loss_D: 0.0361 Loss_G: 5.9011
[40/500][60/157] Loss_D: 0.0418 Loss_G: 5.8825
[40/500][61/157] Loss_D: 0.0501 Loss_G: 6.2302
[40/500][62/157] Loss_D: 0.0184 Loss_G: 6.2755
[40/500][63/157] Loss_D: 0.0273 Loss_G: 5.9655
[40/500][64/157] Loss_D: 0.0250 Loss_G: 5.7513
[40/500][65/157] Loss_D: 0.0298 Loss_G: 6.0434
[40/500][66/157] Loss_D: 0.0299 Loss_G: 6.4280
[40/500][67/157] Loss_D: 0.0205 Loss_G: 6.3743
[40/500][68/157] Loss_D: 0.0173 Loss_G: 6.2749
[40/500][69/157] Loss_D: 0.0199 Loss_G: 6.0541
[40/500][70/157] Loss_D: 0.0309 Loss_G: 6.5044
[40/500][71/157] Loss_D: 0.0177 Loss_G: 6.6093
[40/500][72/157] Loss_D: 0.0363 Loss_G: 7.2993
[40/500][73/157] Loss_D: 0.0093 Loss_G: 7.6995
[40/500][74/157] Loss_D: 0.0087 Loss_G: 7.3493
[40/500][75/157] Loss_D: 0.0540 Loss_G: 8.2688
[40/500][76/157] Loss_D: 0.0172 Loss_G: 8.3312
[40/500][77/157] Loss_D: 0.0086 Loss_G: 7.6863
[40/500][78/157] Loss_D: 0.0232 Loss_G: 7.4930
[40/500][79/157] Loss_D: 0.0175 Loss_G: 7.8834
[40/500][80/157] Loss_D: 0.0109 Loss_G: 9.5329
[40/500][81/157] Loss_D: 0.0093 Loss_G: 7.3253
[40/500][82/157] Loss_D: 0.0674 Loss_G: 10.6709
[40/500][83/157] Loss_D: 0.0010 Loss_G: 10.8321
[40/500][84/157] Loss_D: 0.0083 Loss_G: 8.5728
[40/500][85/157] Loss_D: 0.0124 Loss_G: 6.9085
[40/500][86/157] Loss_D: 0.0181 Loss_G: 7.0867
[40/500][87/157] Loss_D: 0.0130 Loss_G: 7.3527
[40/500][88/157] Loss_D: 0.0189 Loss_G: 7.2494
[40/500][89/157] Loss_D: 0.0302 Loss_G: 8.7555
[40/500][90/157] Loss_D: 0.0147 Loss_G: 7.7668
[40/500][91/157] Loss_D: 0.0325 Loss_G: 7.7779
[40/500][92/157] Loss_D: 0.0257 Loss_G: 8.3955
[40/500][93/157] Loss_D: 0.0113 Loss_G: 8.3687
[40/500][94/157] Loss_D: 0.0124 Loss_G: 7.6081
[40/500][95/157] Loss_D: 0.0088 Loss_G: 7.6012
[40/500][96/157] Loss_D: 0.0241 Loss_G: 7.6573
[40/500][97/157] Loss_D: 0.0522 Loss_G: 10.8114
[40/500][98/157] Loss_D: 0.0071 Loss_G: 11.0529
[40/500][99/157] Loss_D: 0.0043 Loss_G: 8.0707
[40/500][100/157] Loss_D: 0.0141 Loss_G: 7.2864
[40/500][101/157] Loss_D: 0.0234 Loss_G: 7.3585
[40/500][102/157] Loss_D: 0.0148 Loss_G: 7.4577
[40/500][103/157] Loss_D: 0.0190 Loss_G: 8.1904
[40/500][104/157] Loss_D: 0.0201 Loss_G: 8.1518
[40/500][105/157] Loss_D: 0.0220 Loss_G: 9.1069
[40/500][106/157] Loss_D: 0.0108 Loss_G: 9.0069
[40/500][107/157] Loss_D: 0.0044 Loss_G: 8.0970
[40/500][108/157] Loss_D: 0.0076 Loss_G: 7.2699
[40/500][109/157] Loss_D: 0.0052 Loss_G: 7.4036
[40/500][110/157] Loss_D: 0.0167 Loss_G: 7.2742
[40/500][111/157] Loss_D: 0.0032 Loss_G: 7.9825
[40/500][112/157] Loss_D: 0.3462 Loss_G: 32.6314
[40/500][113/157] Loss_D: 0.1704 Loss_G: 40.6010
[40/500][114/157] Loss_D: 0.0065 Loss_G: 44.4607
[40/500][115/157] Loss_D: 0.0142 Loss_G: 43.9761
[40/500][116/157] Loss_D: 0.0160 Loss_G: 45.0376
[40/500][117/157] Loss_D: 0.0042 Loss_G: 45.9534
[40/500][118/157] Loss_D: 0.0061 Loss_G: 45.2998
[40/500][119/157] Loss_D: 0.0023 Loss_G: 45.4654
[40/500][120/157] Loss_D: 0.0033 Loss_G: 44.6643
[40/500][121/157] Loss_D: 0.0042 Loss_G: 44.6020
[40/500][122/157] Loss_D: 0.0002 Loss_G: 44.4807
[40/500][123/157] Loss_D: 0.0004 Loss_G: 44.0402
[40/500][124/157] Loss_D: 0.0055 Loss_G: 43.9188
[40/500][125/157] Loss_D: 0.0021 Loss_G: 43.1988
[40/500][126/157] Loss_D: 0.0008 Loss_G: 41.6770
[40/500][127/157] Loss_D: 0.0001 Loss_G: 40.8719
[40/500][128/157] Loss_D: 0.0009 Loss_G: 40.3803
[40/500][129/157] Loss_D: 0.0023 Loss_G: 39.0143
[40/500][130/157] Loss_D: 0.0254 Loss_G: 39.0317
[40/500][131/157] Loss_D: 0.0008 Loss_G: 37.9451
[40/500][132/157] Loss_D: 0.0253 Loss_G: 37.1046
[40/500][133/157] Loss_D: 0.0046 Loss_G: 36.2807
[40/500][134/157] Loss_D: 0.0025 Loss_G: 35.5878
[40/500][135/157] Loss_D: 0.0011 Loss_G: 33.6500
[40/500][136/157] Loss_D: 0.0061 Loss_G: 33.5011
[40/500][137/157] Loss_D: 0.0015 Loss_G: 30.0363
[40/500][138/157] Loss_D: 0.0019 Loss_G: 31.0197
[40/500][139/157] Loss_D: 0.0027 Loss_G: 28.4693
[40/500][140/157] Loss_D: 0.0189 Loss_G: 27.3072
[40/500][141/157] Loss_D: 0.0051 Loss_G: 26.6637
[40/500][142/157] Loss_D: 0.0077 Loss_G: 24.8390
[40/500][143/157] Loss_D: 0.0123 Loss_G: 23.8334
[40/500][144/157] Loss_D: 0.0014 Loss_G: 23.3755
[40/500][145/157] Loss_D: 0.0036 Loss_G: 19.6341
[40/500][146/157] Loss_D: 0.0025 Loss_G: 18.1076
[40/500][147/157] Loss_D: 0.0029 Loss_G: 16.9415
[40/500][148/157] Loss_D: 0.0028 Loss_G: 16.4647
[40/500][149/157] Loss_D: 0.0048 Loss_G: 14.6184
[40/500][150/157] Loss_D: 0.0074 Loss_G: 13.2544
[40/500][151/157] Loss_D: 0.0053 Loss_G: 13.0052
[40/500][152/157] Loss_D: 0.0070 Loss_G: 11.8815
[40/500][153/157] Loss_D: 0.0078 Loss_G: 12.1657
[40/500][154/157] Loss_D: 0.0094 Loss_G: 10.4259
[40/500][155/157] Loss_D: 0.0073 Loss_G: 9.9345
[40/500][156/157] Loss_D: 0.0082 Loss_G: 9.7609
[41/500][0/157] Loss_D: 0.0079 Loss_G: 9.2920
[41/500][1/157] Loss_D: 0.0134 Loss_G: 8.5241
[41/500][2/157] Loss_D: 0.0156 Loss_G: 8.6983
[41/500][3/157] Loss_D: 0.0250 Loss_G: 8.1148
[41/500][4/157] Loss_D: 0.0160 Loss_G: 8.3324
[41/500][5/157] Loss_D: 0.0187 Loss_G: 7.6281
[41/500][6/157] Loss_D: 0.0191 Loss_G: 7.4707
[41/500][7/157] Loss_D: 0.0092 Loss_G: 8.3976
[41/500][8/157] Loss_D: 0.0118 Loss_G: 7.9800
[41/500][9/157] Loss_D: 0.0126 Loss_G: 7.3999
[41/500][10/157] Loss_D: 0.0165 Loss_G: 7.0854
[41/500][11/157] Loss_D: 0.0095 Loss_G: 7.6392
[41/500][12/157] Loss_D: 0.0079 Loss_G: 7.3862
[41/500][13/157] Loss_D: 0.0181 Loss_G: 7.3812
[41/500][14/157] Loss_D: 0.0168 Loss_G: 6.9518
[41/500][15/157] Loss_D: 0.0094 Loss_G: 7.8525
[41/500][16/157] Loss_D: 0.0165 Loss_G: 7.3024
[41/500][17/157] Loss_D: 0.0029 Loss_G: 8.4487
[41/500][18/157] Loss_D: 0.0169 Loss_G: 7.0449
[41/500][19/157] Loss_D: 0.0167 Loss_G: 7.1307
[41/500][20/157] Loss_D: 0.0255 Loss_G: 6.7970
[41/500][21/157] Loss_D: 0.0154 Loss_G: 6.9745
[41/500][22/157] Loss_D: 0.0110 Loss_G: 6.9925
</code></pre>
<p>As you can see, a HUGE change happened to the generator loss (Loss_G).</p>
<p>Any idea why that happened ?</p>
<p>Any idea how to overcome such a problem ?</p> | 2021-08-24 08:49:51.383000+00:00 | 2021-09-01 14:17:53.027000+00:00 | null | python|pytorch|conv-neural-network|artificial-intelligence|generative-adversarial-network | ['https://i.stack.imgur.com/9KZjG.png', 'https://jonathan-hui.medium.com/gan-why-it-is-so-hard-to-train-generative-advisory-networks-819a86b3750b', 'https://jonathan-hui.medium.com/gan-what-is-wrong-with-the-gan-cost-function-6f594162ce01', 'https://arxiv.org/pdf/1802.03446.pdf', 'https://arxiv.org/pdf/1802.05957.pdf'] | 5 |
67,302,942 | <p>This question has long gone unanswered, so I thought I'd answer with my solution. I implemented RetinaNet for a number of satellite detection problems with good results. This CNN is described in the paper Focal Loss for Dense Object Detection, which you can find here: <a href="https://arxiv.org/abs/1708.02002" rel="nofollow noreferrer">https://arxiv.org/abs/1708.02002</a>. I used this Keras library for the implementation: <a href="https://github.com/fizyr/keras-retinanet" rel="nofollow noreferrer">https://github.com/fizyr/keras-retinanet</a>.</p>
<p>I've used it to detect seals in drone imagery: <a href="https://bigdata.duke.edu/projects/deep-learning-aerial-wildlife-surveillance" rel="nofollow noreferrer">https://bigdata.duke.edu/projects/deep-learning-aerial-wildlife-surveillance</a></p>
<p>Birds in drone imagery: <a href="https://research.repository.duke.edu/concern/datasets/kp78gh20s" rel="nofollow noreferrer">https://research.repository.duke.edu/concern/datasets/kp78gh20s</a></p>
<p>And even whales in satellite imagery. All of which it did well with minimal adjustment.</p> | 2021-04-28 15:12:52.313000+00:00 | 2021-04-28 15:12:52.313000+00:00 | null | null | 50,977,740 | <p>I'm looking to detect boats in large satellite scenes of the ocean. I'm successfully applied <a href="https://github.com/matterport/Mask_RCNN" rel="noreferrer">matterport's Mask-RCNN setup</a> on small subsets of satellite imagery but it is way too slow to analyze huge images like WorldView. I'm looking for something fast that can do bounding boxes, is in python, implemented in Keras, and ideally optimized (or well documented so I can optimize it) for satellite imagery. Any suggestions?</p>
<p>I've found a couple promising leads:</p>
<ul>
<li>You Only Look Twice, YOLO variant optimized for satellite imagery but built in C and not super well documented
<ul>
<li>Code: <a href="https://github.com/avanetten/yolt" rel="noreferrer">https://github.com/avanetten/yolt</a></li>
<li>Paper: <a href="https://arxiv.org/pdf/1805.09512.pdf" rel="noreferrer">https://arxiv.org/pdf/1805.09512.pdf</a></li>
</ul></li>
<li>RasterVision: a general Python-based framework for applying CNNs to satellite imagery; looks promising but nascent
<ul>
<li>Code: <a href="https://github.com/azavea/raster-vision" rel="noreferrer">https://github.com/azavea/raster-vision</a></li>
</ul></li>
<li>This Kaggle competition has some promising info but at ~18 months old is somewhat outdated:
<ul>
<li>Link: <a href="https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection" rel="noreferrer">https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection</a></li>
</ul></li>
</ul>
<p>I may try to customize this implementation of <a href="https://github.com/fizyr/keras-retinanet" rel="noreferrer">RetinaNet in Keras</a> for satellite imagery following the suggestions from the YOLT paper but would love other suggestions!</p> | 2018-06-21 21:36:16.250000+00:00 | 2021-04-28 15:12:52.313000+00:00 | null | python|keras|object-detection|convolutional-neural-network|satellite-image | ['https://arxiv.org/abs/1708.02002', 'https://github.com/fizyr/keras-retinanet', 'https://bigdata.duke.edu/projects/deep-learning-aerial-wildlife-surveillance', 'https://research.repository.duke.edu/concern/datasets/kp78gh20s'] | 4 |
69,927,660 | <p>First, be aware that your issue is probably better covered by some legal approach (a contract reviewed by a paid lawyer) than by technical means.</p>
<p>Your approach is similar to the <a href="https://en.wikipedia.org/wiki/Caesar_cipher" rel="nofollow noreferrer">Caesar cipher</a> (which was broken many centuries ago; the key insight: compute letter frequencies, since in English text <code>e</code> is the most frequent letter). Even the German <a href="https://en.wikipedia.org/wiki/Enigma_machine" rel="nofollow noreferrer">Enigma machine</a> did a lot better in WW2. Read about the work of <a href="https://en.wikipedia.org/wiki/Alan_Turing" rel="nofollow noreferrer">Alan Turing</a> during WW2 (his team broke the Enigma encryption).</p>
<blockquote>
<p>Is there some elegant way to do it at compile time, perhaps using the C preprocessor and a macro?</p>
</blockquote>
<h2>No, there is not</h2>
<p>(Mathematical proofs of that exist in the literature, covered by <em>books</em> related to <a href="https://frama-c.com/" rel="nofollow noreferrer">Frama-C</a>, cybersecurity, or the <a href="https://coq.inria.fr/" rel="nofollow noreferrer">Coq</a> proof assistant; be aware of <a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="nofollow noreferrer">Rice's theorem</a>; read also the Bertot and Castéran book <em>Interactive Theorem Proving and Program Development</em>, ISBN 3-540-20854-2.)</p>
<p>The argument of such a proof is based on cardinality. You could also use a probabilistic approach: store in your program some cryptic hash code (e.g. computed by <a href="https://man7.org/linux/man-pages/man3/crypt.3.html" rel="nofollow noreferrer">crypt(3)</a> at build time) and ask the user to input a secret key, etc.</p>
<p>Any professional hacker will be technically able (perhaps after weeks of work) to find your "secret" string. So will colleagues working on or with <a href="https://binsec.github.io/" rel="nofollow noreferrer">BinSec</a>.</p>
<p>However, you could write some metaprogram generating your obfuscated string as C code (to be <code>#include</code>-d at compile time), and add into your program some deobfuscation routine.</p>
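<p>A minimal sketch of such a build-time metaprogram, written here in Python, could look as follows; the secret, the XOR key and the generated file name are arbitrary choices, and a matching C routine has to XOR each byte with the same key at run time to recover the string:</p>
<pre><code>#!/usr/bin/env python3
# Build-time metaprogram: emit an obfuscated string as a C header to be #include-d.
SECRET = "https://example.com/halloffame"   # placeholder secret
KEY = 0x5A                                   # arbitrary XOR key

with open("secret_url.h", "w") as header:
    encoded = ", ".join(f"0x{ord(ch) ^ KEY:02x}" for ch in SECRET)
    header.write("/* generated file, do not edit */\n")
    header.write(f"#define OBFUSCATED_URL_KEY 0x{KEY:02x}\n")
    header.write(f"#define OBFUSCATED_URL_LEN {len(SECRET)}\n")
    header.write(f"static const unsigned char OBFUSCATED_URL[OBFUSCATED_URL_LEN] = {{ {encoded} }};\n")
</code></pre>
<p>The C side then XORs the <code>OBFUSCATED_URL_LEN</code> bytes with <code>OBFUSCATED_URL_KEY</code> into a buffer and appends a terminating <code>'\0'</code>; this only deters a casual look in a hex editor, nothing more.</p>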
<blockquote>
<p>I'm fine with something that is gcc specific.</p>
</blockquote>
<p>On large programs, consider developing your <a href="https://gcc.gnu.org/onlinedocs/gccint/Plugins.html" rel="nofollow noreferrer">GCC plugin</a> (perhaps starting with <a href="https://github.com/bstarynk/bismon/" rel="nofollow noreferrer">Bismon</a>). See also the <a href="https://decoder-project.Eu/" rel="nofollow noreferrer">DECODER</a> project.</p>
<p>Be aware, however, of <a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="nofollow noreferrer">Rice's theorem</a>. Read about the <a href="https://en.wikipedia.org/wiki/P_versus_NP_problem" rel="nofollow noreferrer">P vs NP problem</a>.</p>
<p>Consider also generating some C code (maybe some <code>#include</code>-d header) with tools like <a href="https://logological.org/gpp" rel="nofollow noreferrer">GPP</a>.</p>
<p><a href="https://en.wikipedia.org/wiki/Obfuscation_(software)" rel="nofollow noreferrer">Code obfuscation</a> is a topic with dedicated conferences. Have you attended any of them? Many papers on it appear in ACM conferences.</p>
<p>There could also be legal issues (perhaps related to the <a href="https://gdpr-info.eu/" rel="nofollow noreferrer">GDPR</a>). You should contact your lawyer. In France, see article <a href="https://www.legifrance.gouv.fr/codes/article_lc/LEGIARTI000030939438/" rel="nofollow noreferrer">323</a> of the Code Pénal.</p>
<p>If your code runs on a computer connected to the Internet and interacting with a user, consider a <a href="https://en.wikipedia.org/wiki/Software_as_a_service" rel="nofollow noreferrer">SaaS</a> approach: you could charge money via a <a href="https://en.wikipedia.org/wiki/Visa_Inc." rel="nofollow noreferrer">VISA</a> card at every run (or once a month). Your bank will sell you the appropriate software and permissions.</p>
<blockquote>
<p>I'm writing a game for 8 year olds and the string to be hidden is a URL to be called only once they beat the game and their name will be added to the hall of fame. It's reasonable to assume that most 8 year olds will not have skills that go beyond opening the binary file in a hex editor.</p>
</blockquote>
<p>I know of no 8-year-old kid able to do that, and those who can deserve to be added to your hall of fame. If you are indeed coding a <em>game</em>, I recommend putting the URL in clear text.</p>
<p>NB. The old <a href="https://en.wikipedia.org/wiki/X_PixMap" rel="nofollow noreferrer">XPM</a> program could be inspirational, and so could <a href="http://refpersys.org/" rel="nofollow noreferrer">RefPerSys</a> and Jacques Pitrat's last book <em>Artificial Beings, the conscience of a conscious machine</em> (ISBN-13: 978-1848211018). Feel free to contact me by email <code>[email protected]</code> (home) or <code>[email protected]</code> (office, at <a href="https://www-list.cea.fr/" rel="nofollow noreferrer">CEA LIST</a>) for more.</p>
<p>PS. Consider of course starting your PhD on that topic! In France, at <a href="https://www.ens.fr/" rel="nofollow noreferrer">ENS</a> or <a href="https://en.wikipedia.org/wiki/%C3%89cole_Polytechnique" rel="nofollow noreferrer">Ecole Polytechnique</a>. There are interesting related talks at <a href="https://www.college-de-france.fr/site/college/index.htm" rel="nofollow noreferrer">College de France</a>. In Germany, consider the <a href="https://www.fraunhofer.de/en/institutes/cooperation/learning-labs-cyber-security.html" rel="nofollow noreferrer">Fraunhofer CyberSecurity lab</a>. Probably, the <a href="https://www.bundeswehr.de/de/" rel="nofollow noreferrer">Bundeswehr</a> will fund your research in Germany (but I have no connections there), and also <a href="https://itea4.org/" rel="nofollow noreferrer">ITEA4</a>. Of course, you will spend three or four years full-time to find a good enough solution. Please publish papers on <a href="https://arxiv.org/corr" rel="nofollow noreferrer">arxiv</a>.</p> | 2021-11-11 11:37:38.840000+00:00 | 2021-11-11 13:13:52.617000+00:00 | 2021-11-11 13:13:52.617000+00:00 | null | 69,927,341 | <p>I want to obfuscate a particular string in the binary of a C program to make it harder to analyze. I <em>know</em> this will not prevent someone from seeing the string if running it in a debugger. Yes, this is merely obfuscation.</p>
<p>Every instance of obfuscation triggers a discussion saying it has no value whatsoever. So did this one! <em>I am aware that a capable and determined attacker will be able to recover the string. For the sake of argument, let's say I'm writing a game for X-year-olds and the string to be hidden is a URL to be called only once they beat the game, so that their name can be added to the hall of fame. It's reasonable to assume that most X-year-olds will not have skills that go beyond opening the binary file in a hex editor. Thanks!</em></p>
<p>Is there some elegant way to do the hiding at compile time, perhaps using the C preprocessor and a macro?</p>
<p>What I have seen so far is <a href="https://yurisk.info/2017/06/25/binary-obfuscation-string-obfuscating-in-C/" rel="nofollow noreferrer">a suggestion by Yuri Slobodyanyuk</a> resulting in this:</p>
<pre><code>#define HIDE_LETTER(a) (a) + 0x50
#define UNHIDE_STRING(str) do { char * ptr = str ; while (*ptr) *ptr++ -= 0x50; } while(0)
...
char str1[] = { HIDE_LETTER('s'), HIDE_LETTER('e'), HIDE_LETTER('c'), HIDE_LETTER('r'), HIDE_LETTER('e'),
HIDE_LETTER('t'), '\0' };
UNHIDE_STRING(str1); // unmangle the string in-place
</code></pre>
<p>It works but it's a bit ugly. Perhaps someone knows a better solution?</p>
<p>I'm fine with something that is gcc specific.</p>
<p>PS: For C++ there is <a href="https://github.com/adamyaxley/Obfuscate" rel="nofollow noreferrer">a solution by Adam Yaxley on github</a> but I'm looking for C, not C++. And there's a solution with a little helper program at <a href="https://github.com/TwizzyIndy/hkteam_obfuscator" rel="nofollow noreferrer">https://github.com/TwizzyIndy/hkteam_obfuscator</a></p> | 2021-11-11 11:15:13.320000+00:00 | 2021-11-16 15:04:03.320000+00:00 | 2021-11-11 14:24:30.043000+00:00 | c|string|macros|obfuscation|preprocessor | ['https://en.wikipedia.org/wiki/Caesar_cipher', 'https://en.wikipedia.org/wiki/Enigma_machine', 'https://en.wikipedia.org/wiki/Alan_Turing', 'https://frama-c.com/', 'https://coq.inria.fr/', 'https://en.wikipedia.org/wiki/Rice%27s_theorem', 'https://man7.org/linux/man-pages/man3/crypt.3.html', 'https://binsec.github.io/', 'https://gcc.gnu.org/onlinedocs/gccint/Plugins.html', 'https://github.com/bstarynk/bismon/', 'https://decoder-project.Eu/', 'https://en.wikipedia.org/wiki/Rice%27s_theorem', 'https://en.wikipedia.org/wiki/P_versus_NP_problem', 'https://logological.org/gpp', 'https://en.wikipedia.org/wiki/Obfuscation_(software)', 'https://gdpr-info.eu/', 'https://www.legifrance.gouv.fr/codes/article_lc/LEGIARTI000030939438/', 'https://en.wikipedia.org/wiki/Software_as_a_service', 'https://en.wikipedia.org/wiki/Visa_Inc.', 'https://en.wikipedia.org/wiki/X_PixMap', 'http://refpersys.org/', 'https://www-list.cea.fr/', 'https://www.ens.fr/', 'https://en.wikipedia.org/wiki/%C3%89cole_Polytechnique', 'https://www.college-de-france.fr/site/college/index.htm', 'https://www.fraunhofer.de/en/institutes/cooperation/learning-labs-cyber-security.html', 'https://www.bundeswehr.de/de/', 'https://itea4.org/', 'https://arxiv.org/corr'] | 29 |
34,345,772 | <p>If I understand correctly from your question + comment, what you want is an agent that performs discrete actions using visual input (raw pixels from a camera). This looks exactly like what the DeepMind team recently did, extending the paper you mentioned. Have a look at <a href="http://rdcu.be/cdlg" rel="nofollow">this</a>. It is the newer (and better) version of playing Atari games. They also provide an official implementation, which you can download <a href="https://sites.google.com/a/deepmind.com/dqn/" rel="nofollow">here</a>.
There is even <a href="https://github.com/tambetm/simple_dqn" rel="nofollow">an implementation in Neon</a> which works pretty well. </p>
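<p>To make the "DNN + RL" combination concrete, here is a minimal sketch of the core DQN update in PyTorch (this is not the DeepMind code: the network size, the fake batch and the hyper-parameters are made up, and the separate target network from the paper is omitted for brevity):</p>
<pre><code>import torch
import torch.nn as nn

# Toy Q-network for 84x84 grayscale frames and 4 discrete actions.
q_net = nn.Sequential(nn.Flatten(), nn.Linear(84 * 84, 256), nn.ReLU(), nn.Linear(256, 4))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-4)
gamma = 0.99

# Fake batch; in practice these come from a replay buffer.
s = torch.rand(32, 1, 84, 84)
a = torch.randint(0, 4, (32,))
r = torch.rand(32)
s_next = torch.rand(32, 1, 84, 84)
done = torch.zeros(32)

q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)                 # Q(s, a)
with torch.no_grad():
    target = r + gamma * (1 - done) * q_net(s_next).max(1).values    # TD target
loss = nn.functional.smooth_l1_loss(q_sa, target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>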
<p>Finally, if you want to use continuous actions, you might be interested in this <a href="http://arxiv.org/pdf/1509.02971v2.pdf" rel="nofollow">very recent paper</a>.</p>
<p><em>To recap: yes, somebody combined DNN + RL, it works and if you want to use raw camera data to train an agent with RL, this is definitely one way to go :)</em></p> | 2015-12-17 22:49:58.690000+00:00 | 2015-12-17 22:49:58.690000+00:00 | null | null | 34,246,008 | <p>I'm using joint positions from a Kinect camera as my state space but I think it's going to be too large (25 joints x 30 per second) to just feed into SARSA or Qlearning. </p>
<p>Right now I'm using the Kinect Gesture Builder program, which uses supervised learning to associate user movement with specific gestures. But that requires supervised training, which I'd like to move away from. I figure the algorithm might pick up certain associations between joints that I would pick up myself when classifying the data (hands up, step left, step right, for example). </p>
<p>I think feeding that data into a deep neural network and then passing that into a reinforcement learning algorithm might give me a better result. </p>
<p>There was a paper on this recently. <a href="https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf" rel="nofollow">https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf</a></p>
<p>I know Accord.net has both deep neural networks and RL but has anyone combined them together? Any insights? </p> | 2015-12-12 23:00:13.267000+00:00 | 2015-12-17 22:49:58.690000+00:00 | null | deep-learning|reinforcement-learning|accord.net|q-learning|sarsa | ['http://rdcu.be/cdlg', 'https://sites.google.com/a/deepmind.com/dqn/', 'https://github.com/tambetm/simple_dqn', 'http://arxiv.org/pdf/1509.02971v2.pdf'] | 4 |
41,063,820 | <p>This can be solved in many ways; check <a href="http://blog.datadive.net/interpreting-random-forests/" rel="nofollow noreferrer">http://blog.datadive.net/interpreting-random-forests/</a> (and a Python package for that: <a href="https://github.com/andosa/treeinterpreter" rel="nofollow noreferrer">https://github.com/andosa/treeinterpreter</a>; a small usage sketch is shown below). There are also less direct options, listed after the sketch:</p>
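<p>A minimal treeinterpreter sketch (the iris data and forest here are only stand-ins for an already-fitted sklearn forest):</p>
<pre><code>from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100).fit(X, y)

# Decompose the prediction for a single row into a baseline plus per-feature contributions.
prediction, bias, contributions = ti.predict(forest, X[:1])
print(prediction)        # class probabilities for this row
print(bias)              # dataset-level baseline (training-set class priors)
print(contributions[0])  # shape (n_features, n_classes): each feature's push per class
</code></pre>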
<ul>
<li><a href="https://arxiv.org/abs/1606.05390" rel="nofollow noreferrer">https://arxiv.org/abs/1606.05390</a> (implementation: <a href="https://github.com/sato9hara/defragTrees" rel="nofollow noreferrer">https://github.com/sato9hara/defragTrees</a>)</li>
<li><a href="https://arxiv.org/abs/1611.05722" rel="nofollow noreferrer">https://arxiv.org/abs/1611.05722</a> (implementation: <a href="https://github.com/IBCNServices/GENESIM" rel="nofollow noreferrer">https://github.com/IBCNServices/GENESIM</a>)</li>
</ul> | 2016-12-09 15:32:06.293000+00:00 | 2016-12-09 15:32:06.293000+00:00 | null | null | 41,060,913 | <p>I am using sklearn RFC.</p>
<pre><code>forest.fit(training_data, y_train)
probas_test = forest.predict_proba(test_data)
</code></pre>
<p>I wanted to know: is there a way to find the contribution / importance of each feature that leads to the prediction?</p>
<p>Something like the following, but at the individual datapoint level:</p>
<pre><code> forest.feature_importances_
</code></pre> | 2016-12-09 12:50:27.500000+00:00 | 2017-04-02 12:39:52.500000+00:00 | 2017-04-02 12:39:52.500000+00:00 | machine-learning|scikit-learn|random-forest | ['http://blog.datadive.net/interpreting-random-forests/', 'https://github.com/andosa/treeinterpreter', 'https://arxiv.org/abs/1606.05390', 'https://github.com/sato9hara/defragTrees', 'https://arxiv.org/abs/1611.05722', 'https://github.com/IBCNServices/GENESIM'] | 6 |
48,836,420 | <p>For precisely two reasons:</p>
<ul>
<li>There is no one sequence of compiler optimizations that can simultaneously optimize all possible program characteristics of interest, such as execution time, compilation time, code size, energy consumption, binary-portability, etc. In compiler optimizations research, this is known as the phase ordering problem.</li>
<li>Most developers do not want to bother with figuring out which compiler optimizations to use and in what order; they just want to use whatever is generally recommended in a small number of common scenarios.</li>
</ul>
<p>That's why compiler developers have decided to offer a small collection of optimization levels from which developers can easily choose in general, yet <a href="https://hal.inria.fr/inria-00451106/" rel="nofollow noreferrer">offering hundreds of fine-grained optimization options</a> for advanced scenarios.</p>
<p>The term "optimization levels" is really a misnomer, since they are not exactly "levels" with respect to each other. A better term would be something like "optimization groups".</p>
<p><a href="http://users.elis.ugent.be/~leeckhou/papers/cgo08.pdf" rel="nofollow noreferrer">Designing optimization levels</a> is a complicated matter for compilers that target a broad range of programs and architectures, such as GCC, Clang, icc, and VC++. Many research papers have been published in the past decade that show that the optimization levels offered by compilers are far from being the best for a particular program, target architecture, or specific collection thereof. This motivated a line of research on <a href="https://arxiv.org/abs/1801.04405" rel="nofollow noreferrer">compiler auto-tuning</a>, which can be considered as an approach that falls somewhere in between offering few optimization levels and offering fine-grained control over compiler optimizations.</p>
<p>In summary, optimization levels provide an important convenience for developers, which will be required for many decades to come.</p> | 2018-02-16 23:42:48.037000+00:00 | 2018-02-16 23:42:48.037000+00:00 | null | null | 48,454,344 | <p>I was just wondering why C++ compilers have many optimization levels like O1, O2, etc. Why can't everything be part of just one optimization level O?</p>
<p>I tried searching online a lot but didn't get a convincing answer for this.</p> | 2018-01-26 00:58:11.863000+00:00 | 2018-02-16 23:53:34.310000+00:00 | 2018-01-26 01:36:08.887000+00:00 | c++|compiler-optimization | ['https://hal.inria.fr/inria-00451106/', 'http://users.elis.ugent.be/~leeckhou/papers/cgo08.pdf', 'https://arxiv.org/abs/1801.04405'] | 3
54,558,261 | <p>The code looks like it's using the implementation correctly. To answer your last question, </p>
<blockquote>
<p>Can see the training accuracy is much lower 84.09833333333333 versus 9.93 . Should the learning rate finder find a learning rate that allows to achieve greater training set accuracy ?</p>
</blockquote>
<p>Not really. A few points:</p>
<ol>
<li><p>You are using Adam, which scales the learning rate adaptively for each parameter in the network. The initial learning rate will matter less, as opposed to traditional SGD, for example. The original authors of Adam write </p>
<blockquote>
<p>The hyper-parameters have intuitive interpretations and typically require little tuning. [1]</p>
</blockquote></li>
<li><p>A well-tuned learning rate should make your network converge faster (i.e. in fewer epochs). It might still find the same local minimum as a higher learning rate would, just faster. The risk with too high a learning rate is that you overshoot good local minima and instead end up in a poor one. With a tiny learning rate you should get the best training accuracy, but it will take very long.</p></li>
<li><p>You are training your model for only 2 epochs. If I had to guess, the algorithm has found that a small learning rate leads to good optima, but since it is small, it requires more time to converge. To test this theory, I would recommend running your training longer. </p></li>
</ol>
<p>All that said, your time is probably better spent using Adam with default parameters and directing your attention elsewhere, such as modelling choices (layers, nodes, activations, etc). In my experience standard Adam works really well in most cases.</p>
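<p>As a concrete illustration of that last point (the tiny model below is only a stand-in for your network):</p>
<pre><code>import torch
import torch.nn as nn

model = nn.Linear(28 * 28, 10)                     # stand-in for your NeuralNet
optimizer = torch.optim.Adam(model.parameters())   # library defaults: lr=1e-3, betas=(0.9, 0.999), eps=1e-8
</code></pre>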
<p>[1] <a href="https://arxiv.org/abs/1412.6980" rel="nofollow noreferrer">https://arxiv.org/abs/1412.6980</a></p> | 2019-02-06 16:27:46.553000+00:00 | 2019-02-06 16:27:46.553000+00:00 | null | null | 54,553,388 | <p>Using implementation of lr_finder from <a href="https://github.com/davidtvs/pytorch-lr-finder" rel="nofollow noreferrer">https://github.com/davidtvs/pytorch-lr-finder</a> based on paper <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noreferrer">https://arxiv.org/abs/1506.01186</a></p>
<p>Without the learning rate finder:</p>
<pre><code>from __future__ import print_function, with_statement, division
import torch
from tqdm.autonotebook import tqdm
from torch.optim.lr_scheduler import _LRScheduler
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import torch.utils.data as data_utils
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_moons
from matplotlib import pyplot
from pandas import DataFrame
import torchvision.datasets as dset
import os
import torch.nn.functional as F
import time
import random
import pickle
from sklearn.metrics import confusion_matrix
import pandas as pd
import sklearn
class LRFinder(object):
"""Learning rate range test.
The learning rate range test increases the learning rate in a pre-training run
between two boundaries in a linear or exponential manner. It provides valuable
information on how well the network can be trained over a range of learning rates
and what is the optimal learning rate.
Arguments:
model (torch.nn.Module): wrapped model.
optimizer (torch.optim.Optimizer): wrapped optimizer where the defined learning
is assumed to be the lower boundary of the range test.
criterion (torch.nn.Module): wrapped loss function.
device (str or torch.device, optional): a string ("cpu" or "cuda") with an
optional ordinal for the device type (e.g. "cuda:X", where is the ordinal).
Alternatively, can be an object representing the device on which the
computation will take place. Default: None, uses the same device as `model`.
Example:
>>> lr_finder = LRFinder(net, optimizer, criterion, device="cuda")
>>> lr_finder.range_test(dataloader, end_lr=100, num_iter=100)
Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186
fastai/lr_find: https://github.com/fastai/fastai
"""
def __init__(self, model, optimizer, criterion, device=None):
self.model = model
self.optimizer = optimizer
self.criterion = criterion
self.history = {"lr": [], "loss": []}
self.best_loss = None
# Save the original state of the model and optimizer so they can be restored if
# needed
self.model_state = model.state_dict()
self.model_device = next(self.model.parameters()).device
self.optimizer_state = optimizer.state_dict()
# If device is None, use the same as the model
if device:
self.device = device
else:
self.device = self.model_device
def reset(self):
"""Restores the model and optimizer to their initial states."""
self.model.load_state_dict(self.model_state)
self.model.to(self.model_device)
self.optimizer.load_state_dict(self.optimizer_state)
def range_test(
self,
train_loader,
val_loader=None,
end_lr=10,
num_iter=100,
step_mode="exp",
smooth_f=0.05,
diverge_th=5,
):
"""Performs the learning rate range test.
Arguments:
train_loader (torch.utils.data.DataLoader): the training set data laoder.
val_loader (torch.utils.data.DataLoader, optional): if `None` the range test
will only use the training loss. When given a data loader, the model is
evaluated after each iteration on that dataset and the evaluation loss
is used. Note that in this mode the test takes significantly longer but
generally produces more precise results. Default: None.
end_lr (float, optional): the maximum learning rate to test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
step_mode (str, optional): one of the available learning rate policies,
linear or exponential ("linear", "exp"). Default: "exp".
smooth_f (float, optional): the loss smoothing factor within the [0, 1[
interval. Disabled if set to 0, otherwise the loss is smoothed using
exponential smoothing. Default: 0.05.
diverge_th (int, optional): the test is stopped when the loss surpasses the
threshold: diverge_th * best_loss. Default: 5.
"""
# Reset test results
self.history = {"lr": [], "loss": []}
self.best_loss = None
# Move the model to the proper device
self.model.to(self.device)
# Initialize the proper learning rate policy
if step_mode.lower() == "exp":
lr_schedule = ExponentialLR(self.optimizer, end_lr, num_iter)
elif step_mode.lower() == "linear":
lr_schedule = LinearLR(self.optimizer, end_lr, num_iter)
else:
raise ValueError("expected one of (exp, linear), got {}".format(step_mode))
if smooth_f < 0 or smooth_f >= 1:
raise ValueError("smooth_f is outside the range [0, 1[")
# Create an iterator to get data batch by batch
iterator = iter(train_loader)
for iteration in tqdm(range(num_iter)):
# Get a new set of inputs and labels
try:
inputs, labels = next(iterator)
except StopIteration:
iterator = iter(train_loader)
inputs, labels = next(iterator)
# Train on batch and retrieve loss
loss = self._train_batch(inputs, labels)
if val_loader:
loss = self._validate(val_loader)
# Update the learning rate
lr_schedule.step()
self.history["lr"].append(lr_schedule.get_lr()[0])
# Track the best loss and smooth it if smooth_f is specified
if iteration == 0:
self.best_loss = loss
else:
if smooth_f > 0:
loss = smooth_f * loss + (1 - smooth_f) * self.history["loss"][-1]
if loss < self.best_loss:
self.best_loss = loss
# Check if the loss has diverged; if it has, stop the test
self.history["loss"].append(loss)
if loss > diverge_th * self.best_loss:
print("Stopping early, the loss has diverged")
break
print("Learning rate search finished. See the graph with {finder_name}.plot()")
def _train_batch(self, inputs, labels):
# Set model to training mode
# self.model.train()
# Move data to the correct device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# Forward pass
self.optimizer.zero_grad()
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
# Backward pass
loss.backward()
self.optimizer.step()
return loss.item()
def _validate(self, dataloader):
# Set model to evaluation mode and disable gradient computation
running_loss = 0
self.model.eval()
with torch.no_grad():
for inputs, labels in dataloader:
# Move data to the correct device
inputs = inputs.to(self.device)
labels = labels.to(self.device)
# Forward pass and loss computation
outputs = self.model(inputs)
loss = self.criterion(outputs, labels)
running_loss += loss.item() * inputs.size(0)
return running_loss / len(dataloader.dataset)
def plot(self, skip_start=10, skip_end=5, log_lr=True):
"""Plots the learning rate range test.
Arguments:
skip_start (int, optional): number of batches to trim from the start.
Default: 10.
skip_end (int, optional): number of batches to trim from the start.
Default: 5.
log_lr (bool, optional): True to plot the learning rate in a logarithmic
scale; otherwise, plotted in a linear scale. Default: True.
"""
if skip_start < 0:
raise ValueError("skip_start cannot be negative")
if skip_end < 0:
raise ValueError("skip_end cannot be negative")
# Get the data to plot from the history dictionary. Also, handle skip_end=0
# properly so the behaviour is the expected
lrs = self.history["lr"]
losses = self.history["loss"]
if skip_end == 0:
lrs = lrs[skip_start:]
losses = losses[skip_start:]
else:
lrs = lrs[skip_start:-skip_end]
losses = losses[skip_start:-skip_end]
# Plot loss as a function of the learning rate
plt.plot(lrs, losses)
if log_lr:
plt.xscale("log")
plt.xlabel("Learning rate")
plt.ylabel("Loss")
plt.show()
class LinearLR(_LRScheduler):
"""Linearly increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the initial learning rate which is the lower
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(LinearLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr + r * (self.end_lr - base_lr) for base_lr in self.base_lrs]
class ExponentialLR(_LRScheduler):
"""Exponentially increases the learning rate between two boundaries over a number of
iterations.
Arguments:
optimizer (torch.optim.Optimizer): wrapped optimizer.
end_lr (float, optional): the initial learning rate which is the lower
boundary of the test. Default: 10.
num_iter (int, optional): the number of iterations over which the test
occurs. Default: 100.
last_epoch (int): the index of last epoch. Default: -1.
"""
def __init__(self, optimizer, end_lr, num_iter, last_epoch=-1):
self.end_lr = end_lr
self.num_iter = num_iter
super(ExponentialLR, self).__init__(optimizer, last_epoch)
def get_lr(self):
curr_iter = self.last_epoch + 1
r = curr_iter / self.num_iter
return [base_lr * (self.end_lr / base_lr) ** r for base_lr in self.base_lrs]
trans = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (1.0,))])
root = './data'
if not os.path.exists(root):
os.mkdir(root)
train_set = dset.MNIST(root=root, train=True, transform=trans, download=True)
test_set = dset.MNIST(root=root, train=False, transform=trans, download=True)
batch_size = 64
train_loader = torch.utils.data.DataLoader(
dataset=train_set,
batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(
dataset=test_set,
batch_size=batch_size,
shuffle=True)
class NeuralNet(nn.Module):
def __init__(self):
super(NeuralNet, self).__init__()
self.fc1 = nn.Linear(28*28, 500)
self.fc2 = nn.Linear(500, 256)
self.fc3 = nn.Linear(256, 10)
def forward(self, x):
x = x.view(-1, 28*28)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
num_epochs = 2
random_sample_size = 200
# Hyper-parameters
input_size = 100
hidden_size = 100
num_classes = 10
learning_rate = .0001
# Device configuration
device = 'cpu'
model = NeuralNet().to(device)
# Loss and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
# lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
# lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
# lr_finder.plot()
# optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])
# print(lr_finder.history['lr'])
predicted_test = []
labels_l = []
actual_values = []
predicted_values = []
N = len(train_loader)
# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
for i, (images, labels) in enumerate(train_loader):
# Move tensors to the configured device
# images = images.reshape(-1, 50176).to(device)
images = images.to(device)
labels = labels.to(device)
# Forward pass
outputs = model(images)
predicted = outputs.data.max(1)[1]
predicted_test.append(predicted.cpu().numpy())
labels_l.append(labels.cpu().numpy())
loss = criterion(outputs, labels)
# Backward and optimize
optimizer.zero_grad()
loss.backward()
optimizer.step()
predicted_values.append(np.concatenate(predicted_test).ravel())
actual_values.append(np.concatenate(labels_l).ravel())
print ('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'.format(epoch+1, num_epochs, i+1, total_step, loss.item()))
print('training accuracy : ', 100 * len((np.where(np.array(predicted_values[0])==(np.array(actual_values[0])))[0])) / len(actual_values[0]))
</code></pre>
<p>Results:</p>
<pre><code>Epoch [1/2], Step [938/938], Loss: 0.5374
training accuracy : 84.09833333333333
Epoch [2/2], Step [938/938], Loss: 0.2055
training accuracy : 84.09833333333333
</code></pre>
<p>With the learning rate finder code uncommented:</p>
<p>The code below, which was previously commented out, is now uncommented:</p>
<pre><code>criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
lr_finder = LRFinder(model, optimizer, criterion, device="cpu")
lr_finder.range_test(train_loader, end_lr=100, num_iter=100)
lr_finder.plot()
optimizer = torch.optim.Adam(model.parameters(), lr=lr_finder.history['lr'][0])
print(lr_finder.history['lr'])
</code></pre>
<p>The model achieves these results after two epochs:</p>
<pre><code>Epoch [1/2], Step [938/938], Loss: 3.7311
training accuracy : 9.93
Epoch [2/2], Step [938/938], Loss: 3.5106
training accuracy : 9.93
</code></pre>
<p>You can see the training accuracy is much lower: <code>9.93</code> versus <code>84.09833333333333</code>. Shouldn't the learning rate finder find a learning rate that allows the model to achieve greater training set accuracy?</p> | 2019-02-06 12:12:17.490000+00:00 | 2019-02-06 16:27:46.553000+00:00 | null | deep-learning|computer-vision|pytorch|mnist|fast-ai | ['https://arxiv.org/abs/1412.6980'] | 1
34,488,957 | <p>This sounds pretty similar to what I did in my Bachelor's thesis about "<a href="http://arxiv.org/abs/1511.09030" rel="nofollow">On-line Recognition of Handwritten Mathematical Symbols</a>".</p>
<p>You can recognize those patterns with a <a href="https://en.wikipedia.org/wiki/Artificial_neural_network" rel="nofollow">neural network</a>. Interpolate the lines, normalize the points on a line to a fixed number, take the (x,y) coordinates as input features and the types of shape as output nodes (one node for circle, one node for triangle, ...).</p>
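<p>A rough numpy sketch of that interpolate-and-normalize step (the choice of 32 points is arbitrary):</p>
<pre><code>import numpy as np

def resample_stroke(points, n=32):
    """Resample a hand-drawn stroke [(x, y), ...] to n points evenly spaced
    along its arc length, then shift/scale it into the unit square."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate(([0.0], np.cumsum(seg)))       # cumulative arc length
    target = np.linspace(0.0, dist[-1], n)
    x = np.interp(target, dist, pts[:, 0])
    y = np.interp(target, dist, pts[:, 1])
    out = np.stack([x, y], axis=1)
    out -= out.min(axis=0)                               # normalize position
    out /= max(out.max(), 1e-9)                          # normalize scale
    return out.ravel()                                   # 2*n input features

features = resample_stroke([(34, 55), (44, 66), (60, 80)])
</code></pre>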
<p>You can create such a network with TensorFlow. Here are <a href="http://martin-thoma.com/tensor-flow-quick/" rel="nofollow">my two cents about TensorFlow</a>.</p>
<h2>Dynamic Time Warping</h2>
<p>This is a pattern-matching approach. See my Bachelor's thesis or <a href="https://en.wikipedia.org/wiki/Dynamic_time_warping" rel="nofollow">Wikipedia</a>.</p>
<h2>Alternatives to Machine Learning</h2>
<p>If you want something simpler and if you only have a tiny amount of classes (e.g. < 30), then you could probably also hand-engineer an algorithm. I recommend to have a look at the <a href="https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm" rel="nofollow">Douglas-Peucker algorithm</a> to find the most important points. I've described it in my bachelor's thesis. When you go to <a href="http://www.martin-thoma.de/write-math/render/?raw_data_id=331161&show_points=on&dot_reduction_threshold=0.2&dehooking_threshold=20&minimum_time_delay_filter_constant=10&smoothing_applications=1&smooth1=0&smooth2=1&smooth3=0&douglas_peucker=on&epsilon=10&cubic_spline_points=20" rel="nofollow">this interactive preprocessing page</a> you can get a feeling for this algorithm (you can draw something on write-math.com, click on "Drawing" below the canvas, click on "Preprocessing" and apply it by checking the checkbox)</p>
<h2>See also</h2>
<ul>
<li><a href="http://www.cs.virginia.edu/~xj3a/research/publications/PG02.pdf" rel="nofollow">On-line Graphics Recognition</a></li>
<li><a href="http://lib.tkk.fi/Dipl/2011/urn100500.pdf" rel="nofollow">Online Sketch Recognition: Geometric Shapes</a></li>
</ul> | 2015-12-28 06:55:28.773000+00:00 | 2015-12-28 07:40:43.050000+00:00 | 2015-12-28 07:40:43.050000+00:00 | null | 34,488,498 | <p>I have a python list of coordinates saved like this : <code>[(34,55),(44,66)....]</code>.
This list represents a hand-drawn line on the screen. Now I need to check if this line/shape matches some pre-saved similar lists of basic shapes like square, circle, triangle, etc. (Basically I need to recognise user gestures.) Please suggest some machine learning technique to achieve it. Suggest a link if this is a duplicate. (I prefer a Python solution for this.)</p>
<p>P.S.: The shape that the user is inputting comes from a camera/video. It is the path traversed by an object I am tracking with OpenCV. Now I need to figure out what shapes the user is drawing by waving the object in front of the camera.</p>
48,618,732 | <p>I recently read this paper: <a href="https://arxiv.org/abs/1708.05801" rel="nofollow noreferrer">Semantic Relatedness of Words and Phrases</a></p>
<p>Table 1 on p. 3 shows how they used a weighting scheme. They then use the total weighted connections to decide whether two words are related.</p>
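<p>If you want to experiment yourself, a rough NLTK sketch that gathers neighbours over both edge types (the weights are arbitrary placeholders, not the ones from the paper) could look like this:</p>
<pre><code># requires: nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def weighted_neighbours(synset):
    edges = []
    edges += [(s, 1.0) for s in synset.hypernyms() + synset.hyponyms()]
    edges += [(s, 0.5) for s in synset.member_holonyms() + synset.part_holonyms()
                                + synset.member_meronyms() + synset.part_meronyms()]
    return edges

dog = wn.synsets('dog')[0]
for neighbour, weight in weighted_neighbours(dog):
    print(neighbour.name(), weight)
</code></pre>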
<p>As far as I am aware, there is no ready-made function in nltk to do this.</p> | 2018-02-05 08:53:32.577000+00:00 | 2018-02-05 08:53:32.577000+00:00 | null | null | 48,611,488 | <p>StackOverflow!</p>
<p>I searched on stack but I have not found any response about my doubt. My question is follow:</p>
<p>Is there any measure of similarity for WordNet which explores (navigates) holonym/meronym and hypernym/hyponym edges at the same time? I have found only measures which look for a common hypernym vertex in WordNet...</p>
<p>My question does not contain a snippet of code; it is only about a WordNet feature.</p>
<p>UPDATE:
I'm searching for a measure which does not only use 'is-a' edges to relate two concepts for semantic comparison. I want a measure which, in some cases, in order to "bind" two concepts, is allowed to "skip" along the 'is-a' taxonomy until it reaches the closest hypernym and then, with some justification, choose to navigate the 'member of' (holonym/meronym) taxonomy instead.</p>
<p>Thanks in advance.</p> | 2018-02-04 18:16:48.650000+00:00 | 2018-02-08 14:29:45.783000+00:00 | 2018-02-08 14:29:45.783000+00:00 | nlp|nltk|wordnet|word-sense-disambiguation | ['https://arxiv.org/abs/1708.05801'] | 1 |
55,389,725 | <p>It turns out that in fact <strong>all</strong> of the peers can be Byzantine (same for the clients as well).</p>
<p>This is precisely stated in the <a href="https://arxiv.org/pdf/1801.10228.pdf" rel="nofollow noreferrer">Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains</a> paper, section 3.5 (Trust and Fault Model). The integrity of HLF relies <strong>solely</strong> on the orderers. This is because even if all peers collude and try to rewrite history in the blockchain, they won't be able to produce signed blocks (as the orderers are the only entities that can make blocks). </p>
<p>The best they can do is to try to delete blocks, but even with the presence of a <em>single</em> honest peer, that peer will show a "longer" history of blocks which will be the accepted one.</p> | 2019-03-28 03:28:37.287000+00:00 | 2019-03-28 03:28:37.287000+00:00 | null | null | 55,273,967 | <p>This is a more theoretical question than a practical one, but I was thinking on possible attacks in Hyperledger Fabric.</p>
<p>On a high level, orderers are the block makers, and the whole blockchain is eventually maintained by the peers. The consensus algorithm is executed among the orderers (which might tolerate up to a certain number of byzantine orderers if the consensus is byzantine fault tolerant). </p>
<p>But what happens if some peers are compromised? What would happen if an attacker subverts more than half of the peers in the system? Could it result in a chain fork or reorganization?</p> | 2019-03-21 04:50:52.823000+00:00 | 2019-03-28 03:28:37.287000+00:00 | null | hyperledger-fabric|hyperledger | ['https://arxiv.org/pdf/1801.10228.pdf'] | 1 |
37,254,210 | <p>I'm working on the same thing. Tensorflow with a deep neural network is all that's needed. I believe convolutional LSTM neural networks can take weather data as an input and give a prediction as an output. You just need historical data to train it. Maybe an almanac combined with forecasts and measurement data at the time of forecast.</p>
<p>Research has shown that the Convolutional Long Short-Term Memory (ConvLSTM) algorithm is more accurate at predicting precipitation than FC-LSTM and the current state-of-the-art ROVER algorithm. Here's the paper: <a href="https://arxiv.org/abs/1506.04214" rel="nofollow noreferrer">https://arxiv.org/abs/1506.04214</a></p>
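<p>A minimal Keras sketch of a ConvLSTM-style model (grid size, depth and sequence length are made-up numbers, not the paper's architecture):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

# Toy nowcasting model: input is a sequence of 10 weather/radar grids of 64x64 with 1 channel,
# output is the next grid. All sizes here are placeholders.
model = tf.keras.Sequential([
    layers.ConvLSTM2D(32, kernel_size=(3, 3), padding='same',
                      return_sequences=True, input_shape=(10, 64, 64, 1)),
    layers.ConvLSTM2D(32, kernel_size=(3, 3), padding='same', return_sequences=False),
    layers.Conv2D(1, kernel_size=(3, 3), padding='same', activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy')
</code></pre>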
<p>Research also shows that wind can be predicted using NOAA's data and the machine learning algorithms predict better than NOAA. The paper is here: <a href="http://aditya-grover.github.io/files/publications/kdd15.pdf" rel="nofollow noreferrer">http://aditya-grover.github.io/files/publications/kdd15.pdf</a></p>
<p>And finally research has shown that temperature, humidity, and wind can be accurately predicted out to 72hrs using a 15 year data period of hourly measurements. Everything needed to train an algo is spelled out in this article: Sequence to Sequence Weather Forecasting with Long
Short-Term Memory Recurrent Neural Networks, International Journal of Computer Applications (0975 - 8887)
Volume 143 - No.11, June 2016</p> | 2016-05-16 12:43:46.440000+00:00 | 2017-01-01 01:41:29.777000+00:00 | 2017-01-01 01:41:29.777000+00:00 | null | 33,747,026 | <p>I am trying to write a demand forecast that considers weather data (temperature, pressure, humidity) one by one (or all together). I want to use machine learning algorithms to do so. I previously used Linear Regression to do the demand forecast without considering the weather data; now that I have weather data, I am not sure which machine learning algorithm I should use for the task. I am a newbie in Machine Learning and would be grateful if you could help me figure out this problem.</p>
<p>I am using Python for my code, so if you can direct me to use any specific module that would be great.</p> | 2015-11-17 00:16:11.423000+00:00 | 2018-01-18 16:12:28.117000+00:00 | 2015-11-19 00:36:46.397000+00:00 | python|machine-learning|forecasting|weather | ['https://arxiv.org/abs/1506.04214', 'http://aditya-grover.github.io/files/publications/kdd15.pdf'] | 2 |
72,327,124 | <p>Here are three tests mentioned in <a href="https://www.robots.ox.ac.uk/%7Eian/Teaching/Estimation/LectureNotes2.pdf" rel="nofollow noreferrer">[1]</a>:</p>
<ol>
<li><strong>Innovation magnitude bound test:</strong> basically, comparing innovations against sigma points obtained from matrix S.</li>
<li><strong>Chi-squared test on normalized innovations squared:</strong> The normalized innovation squared is supposed to have a chi-squared distribution (a small sketch of this test is given after this list).</li>
<li><strong>Innovation whiteness (autocorrelation) test:</strong> Innovations are supposed to be white, hence the test on whiteness.</li>
</ol>
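<p>For the second test, a small numpy/scipy sketch for a single time step looks like this (the innovation, covariance and confidence level are placeholders; in MATLAB the gate would come from <code>chi2inv</code>):</p>
<pre><code>import numpy as np
from scipy.stats import chi2

# Normalized innovation squared (NIS) test for one time step.
v = np.array([0.3, -0.1])            # innovation (placeholder values)
S = np.array([[0.5, 0.1],
              [0.1, 0.4]])           # innovation covariance from the filter

nis = float(v @ np.linalg.solve(S, v))   # v' * inv(S) * v
gate = chi2.ppf(0.95, df=v.size)         # 95% bound for a chi-squared with dim(v) dof
print(nis, gate, nis <= gate)            # consistent if NIS stays below the gate
</code></pre>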
<p>All the three are well explained in <a href="https://www.robots.ox.ac.uk/%7Eian/Teaching/Estimation/LectureNotes2.pdf" rel="nofollow noreferrer">[1]</a>. Another useful reference for performing the second test is <a href="https://arxiv.org/pdf/1807.08855.pdf" rel="nofollow noreferrer">[2]</a>.</p> | 2022-05-21 06:12:59.713000+00:00 | 2022-05-21 06:12:59.713000+00:00 | null | null | 24,550,423 | <p>Statistical test is required in terms of residual check.</p>
<p>How can I test the residual in MATLAB? So far I have only been plotting it.</p>
<ol>
<li>I found that the innovation term oscillates considerably around zero in a zig-zag manner.</li>
<li>Is the innovation expected to be zero? If yes, how do I check that?</li>
</ol> | 2014-07-03 09:51:04.837000+00:00 | 2022-05-21 06:12:59.713000+00:00 | 2014-07-03 10:12:14.843000+00:00 | matlab|kalman-filter|innovation | ['https://www.robots.ox.ac.uk/%7Eian/Teaching/Estimation/LectureNotes2.pdf', 'https://www.robots.ox.ac.uk/%7Eian/Teaching/Estimation/LectureNotes2.pdf', 'https://arxiv.org/pdf/1807.08855.pdf'] | 3 |