name (string, len 15-255) | question (string, len 20-1.77k) | questionUpvotes (int64, 0-23) | timeCreated (string, len 24) | answer (string, len 9-1.09k) | answerUpvotes (int64, 0-75) | timeAnswered (string, len 24) | answerURL (string, len 50-285) | context (string, len 244-1.73k) | answer_start (int64, 0-3.45k) | answers (string, len 46-1.14k) |
---|---|---|---|---|---|---|---|---|---|---|
Is there similar pytorch function as model.summary() as keras? | is there similar pytorch function as model.summary() as keras? | 1 | 2017-05-05T02:41:01.133Z | For a given input shape, you can use the <a href="https://stackoverflow.com/a/66984386/9067615" rel="noopener nofollow ugc">torchinfo</a> (formerly torchsummary) package:
Torchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to Tensorflow’s model.summary()…
Example:
from torchinfo import summary
model = ConvNet()
batch_size = 1… | 0 | 2021-05-08T07:43:54.325Z | https://discuss.pytorch.org/t/is-there-similar-pytorch-function-as-model-summary-as-keras/2678/16 | Use PyTorch’s <a href="https://en.wikipedia.org/wiki/Hessian_matrix" rel="noopener nofollow ugc">autograd.functional</a> library:
torch.autograd.functional.hessian(func, inputs) For a given input shape, you can use the <a href="https://stackoverflow.com/a/66984386/9067615" rel="noopener nofollow ugc">torchinfo</a> (formerly torchsummary) package:
Torchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to Tensorflow’s model.summary()…
Example:
from torchinfo import summary
model = ConvNet()
batch_size = 1… Please see my experiment using a linear model below.
<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/3X/e/6/e62aaed0c93e388566e5d85578b0bca426f42fb4.png" data-download-href="https://discuss.pytorch.org/uploads/default/e62aaed0c93e388566e5d85578b0bca426f42fb4" title="summary fig">[summary fig]</a>
<a href="https://gist.github.com/Tony-Y/9e3687fbe10e817596d1e1ed58c9f191" class="inline-onebox" rel="noopener nofollow ugc">MultiTaskLoss.ipynb · GitHub</a>
In this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug in my original code, as follows:
total_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta
total_l… | 1,075 | {'text': ['For a given input shape, you can use the <a href="https://stackoverflow.com/a/66984386/9067615" rel="noopener nofollow ugc">torchinfo</a> (formerly torchsummary) package:\n\nTorchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to Tensorflow’s model.summary()…\n\nExample:\n\nfrom torchinfo import summary\n\nmodel = ConvNet()\n\nbatch_size = 1…'], 'answer_start': [1075]} |
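A minimal runnable sketch of the torchinfo usage quoted in this answer. The ConvNet below is a hypothetical stand-in, since the thread never shows the original definition:

```python
import torch.nn as nn
from torchinfo import summary  # pip install torchinfo (formerly torchsummary)

# Hypothetical stand-in for the ConvNet mentioned in the answer.
class ConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.fc = nn.Linear(16 * 14 * 14, 10)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

model = ConvNet()
batch_size = 1
# Prints a Keras-like table of layers, output shapes, and parameter counts.
summary(model, input_size=(batch_size, 1, 28, 28))
```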
How to learn the weights between two losses? | I am reproducing the paper "Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics". The loss function is defined as
[loss7]
This means that W and σ are the learned parameters of the network. W are the weights of the network while σ are used to calculate the weigh… | 3 | 2019-03-13T00:37:46.464Z | Please see my experiment using a linear model below.
<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/3X/e/6/e62aaed0c93e388566e5d85578b0bca426f42fb4.png" data-download-href="https://discuss.pytorch.org/uploads/default/e62aaed0c93e388566e5d85578b0bca426f42fb4" title="summary fig">[summary fig]</a>
<a href="https://gist.github.com/Tony-Y/9e3687fbe10e817596d1e1ed58c9f191" class="inline-onebox" rel="noopener nofollow ugc">MultiTaskLoss.ipynb · GitHub</a>
In this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug in my original code, as follows:
total_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta
total_l… | 2 | 2021-06-19T03:20:17.252Z | https://discuss.pytorch.org/t/how-to-learn-the-weights-between-two-losses/39681/43 | Use PyTorch’s <a href="https://en.wikipedia.org/wiki/Hessian_matrix" rel="noopener nofollow ugc">autograd.functional</a> library:
torch.autograd.functional.hessian(func, inputs) For a given input shape, you can use the <a href="https://stackoverflow.com/a/66984386/9067615" rel="noopener nofollow ugc">torchinfo</a> (formerly torchsummary) package:
Torchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to Tensorflow’s model.summary()…
Example:
from torchinfo import summary
model = ConvNet()
batch_size = 1… Please see my experiment using a linear model below.
<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/3X/e/6/e62aaed0c93e388566e5d85578b0bca426f42fb4.png" data-download-href="https://discuss.pytorch.org/uploads/default/e62aaed0c93e388566e5d85578b0bca426f42fb4" title="summary fig">[summary fig]</a>
<a href="https://gist.github.com/Tony-Y/9e3687fbe10e817596d1e1ed58c9f191" class="inline-onebox" rel="noopener nofollow ugc">MultiTaskLoss.ipynb · GitHub</a>
In this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug in my original code, as follows:
total_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta
total_l… | 574 | {'text': ['Please see my experiment using a linear model bellow.\n\n<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/3X/e/6/e62aaed0c93e388566e5d85578b0bca426f42fb4.png" data-download-href="https://discuss.pytorch.org/uploads/default/e62aaed0c93e388566e5d85578b0bca426f42fb4" title="summary fig">[summary fig]</a>\n\n<a href="https://gist.github.com/Tony-Y/9e3687fbe10e817596d1e1ed58c9f191" class="inline-onebox" rel="noopener nofollow ugc">MultiTaskLoss.ipynb · GitHub</a>\n\nIn this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug of my original code as the following:\n\ntotal_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta\n\ntotal_l…'], 'answer_start': [574]} |
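A compact sketch of the uncertainty-weighted multi-task loss this thread discusses, built around the torch.stack line from the answer. Here eta plays the role of the learned per-task log-variances; this is a simplified reading of the linked notebook, not the author's exact code:

```python
import torch
import torch.nn as nn

class MultiTaskLoss(nn.Module):
    def __init__(self, num_tasks):
        super().__init__()
        # One learnable log-variance per task, trained jointly with the network.
        self.eta = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        # losses: list of scalar task losses; stack keeps the autograd graph intact.
        loss = torch.stack(losses)
        total_loss = loss * torch.exp(-self.eta) + self.eta
        return total_loss.sum()

criterion = MultiTaskLoss(num_tasks=2)
l1 = torch.tensor(0.7, requires_grad=True)
l2 = torch.tensor(1.3, requires_grad=True)
criterion([l1, l2]).backward()  # eta must also be passed to the optimizer in real use
```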
Resnet last layer modification | Hello guys, I’m trying to add a dropout layer before the FC layer in the “bottom” of my resnet. So, in order to do that, I remove the original FC layer from the resnet18 with the following code:
resnetk = models.resnet18(pretrained=True)
num_ftrs = resnetk.fc.in_features
resnetk = torch… | 2 | 2019-01-01T19:04:08.422Z | Currently you are rewrapping your pretrained resnet into a new nn.Sequential module, which will lose the forward definition. As you can see in <a href="https://github.com/pytorch/vision/blob/21153802a3086558e9385788956b0f2808b50e51/torchvision/models/resnet.py#L161" rel="nofollow noopener">this line of code</a> in the original resnet implementation, the activation x will be flattened before being passed to the last linear layer. Since this is missi… | 15 | 2019-01-03T19:13:07.369Z | https://discuss.pytorch.org/t/resnet-last-layer-modification/33530/2 | Currently you are rewrapping your pretrained resnet into a new nn.Sequential module, which will lose the forward definition. As you can see in <a href="https://github.com/pytorch/vision/blob/21153802a3086558e9385788956b0f2808b50e51/torchvision/models/resnet.py#L161" rel="nofollow noopener">this line of code</a> in the original resnet implementation, the activation x will be flattened before being passed to the last linear layer. Since this is missi… Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
<a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize</a> Broadcasting wasn’t available in version 0.1.12.
You could try:
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec) | 2,542 | {'text': ['Currently you are rewrapping your pretrained resnet into a new nn.Sequential module, which will lose the forward definition. As you can see in <a href="https://github.com/pytorch/vision/blob/21153802a3086558e9385788956b0f2808b50e51/torchvision/models/resnet.py#L161" rel="nofollow noopener">this line of code</a> in the original resnet implementation, the activation x will be flattened before being passed to the last linear layer. Since this is missi…'], 'answer_start': [2542]} |
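Following the answer above, a sketch that avoids losing the flatten step: instead of rewrapping the whole model in nn.Sequential, replace only resnet.fc, since the original forward() flattens right before it (num_classes=10 is assumed for illustration):

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # assumed for illustration
resnetk = models.resnet18(pretrained=True)
num_ftrs = resnetk.fc.in_features
# forward() already flattens the activation before self.fc, so this stays intact.
resnetk.fc = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(num_ftrs, num_classes))
```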
How to normalize a tensor to 0 mean and 1 variance? | Hi I’m currently converting a tensor to a numpy array just so I can use sklearn.preprocessing.scale
Is there a way to achieve this in PyTorch? I have seen there is torchvision.transforms.Normalize but I can’t work out how to use this outside of the context of a dataloader. (I’m trying to use this … | 1 | 2018-05-28T10:18:32.519Z | Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
<a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize</a> | 1 | 2018-05-28T11:21:43.177Z | https://discuss.pytorch.org/t/how-to-normalize-a-tensor-to-0-mean-and-1-variance/18766/3 | Currently you are rewrapping your pretrained resnet into a new nn.Sequential module, which will lose the forward definition. As you can see in <a href="https://github.com/pytorch/vision/blob/21153802a3086558e9385788956b0f2808b50e51/torchvision/models/resnet.py#L161" rel="nofollow noopener">this line of code</a> in the original resnet implementation, the activation x will be flattened before being passed to the last linear layer. Since this is missi… Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
<a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize</a> Broadcasting wasn’t available in version 0.1.12.
You could try:
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec) | 1,732 | {'text': ['Would this do it?\n\nimport torch\n\nfrom torchvision import transforms\n\nmu = 2\n\nstd = 0.5\n\nt = torch.Tensor([1,2,3])\n\n(t - 2)/0.5\n\n# or if t is an image\n\ntransforms.Normalize(2, 0.5)(t)\n\nsee:\n\n<a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize</a>'], 'answer_start': [1732]} |
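A runnable version of the standardization discussed above, the PyTorch equivalent of sklearn.preprocessing.scale without leaving the tensor:

```python
import torch

t = torch.randn(100, 3) * 4 + 2
whole = (t - t.mean()) / t.std()        # whole-tensor statistics
per_col = (t - t.mean(0)) / t.std(0)    # per-feature, like sklearn's scale
print(per_col.mean(0), per_col.std(0))  # ~0 and ~1 per column
```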
Normalize a vector to [0,1] | How to normalize a vector so all it’s values would be between 0 and 1 ([0,1])? | 2 | 2018-03-08T11:25:10.439Z | Broadcasting wasn’t available in version 0.1.12.
You could try:
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec) | 5 | 2018-03-08T16:40:51.196Z | https://discuss.pytorch.org/t/normalize-a-vector-to-0-1/14594/8 | Currently you are rewrapping your pretrained resnet into a new nn.Sequential module, which will lose the forward definition. As you can see in <a href="https://github.com/pytorch/vision/blob/21153802a3086558e9385788956b0f2808b50e51/torchvision/models/resnet.py#L161" rel="nofollow noopener">this line of code</a> in the original resnet implementation, the activation x will be flattened before being passed to the last linear layer. Since this is missi… Would this do it?
import torch
from torchvision import transforms
mu = 2
std = 0.5
t = torch.Tensor([1,2,3])
(t - 2)/0.5
# or if t is an image
transforms.Normalize(2, 0.5)(t)
see:
<a href="https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/master/torchvision/transforms.html#torchvision.transforms.Normalize</a> Broadcasting wasn’t available in version 0.1.12.
You could try:
tensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec) | 906 | {'text': ['Broadcasting wasn’t available in version 0.1.12.\n\nYou could try:\n\ntensor_vec = tensor_vec / tensor_vec.sum(0).expand_as(tensor_vec)'], 'answer_start': [906]} |
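On any PyTorch version with broadcasting, the [0, 1] rescaling asked about here is a one-liner:

```python
import torch

v = torch.tensor([3.0, -1.0, 5.0, 0.0])
v01 = (v - v.min()) / (v.max() - v.min())  # maps min -> 0, max -> 1
print(v01)  # tensor([0.6667, 0.0000, 1.0000, 0.1667])
```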
Writing a simple Gaussian noise layer in Pytorch | I wrote a simple noise layer for my network.
def gaussian_noise(inputs, mean=0, stddev=0.01):
input = inputs.cpu()
input_array = input.data.numpy()
noise = np.random.normal(loc=mean, scale=stddev, size=np.shape(input_array))
out = np.add(input_array, noise)
output_tensor = to… | 1 | 2017-07-07T10:57:11.309Z | Yes, you can move the mean by adding the mean to the output of the normal variable.
But, a maybe better way of doing it is to use the normal_ function as follows:
def gaussian(ins, is_training, mean, stddev):
if is_training:
noise = Variable(ins.data.new(ins.size()).normal_(mean, stdde… | 6 | 2017-07-09T10:55:17.878Z | https://discuss.pytorch.org/t/writing-a-simple-gaussian-noise-layer-in-pytorch/4694/2 | Yes, you can move the mean by adding the mean to the output of the normal variable.
But, a maybe better way of doing it is to use the normal_ function as follows:
def gaussian(ins, is_training, mean, stddev):
if is_training:
noise = Variable(ins.data.new(ins.size()).normal_(mean, stdde… Thanks for the code.
This should work:
AA = AA.view(A.size(0), -1)
AA -= AA.min(1, keepdim=True)[0]
AA /= AA.max(1, keepdim=True)[0]
AA = AA.view(batch_size, height, width) [image] KFrank:
First, are you using non-zero momentum or weight_decay ?
First of all, i am using the momentum in optimizer.
So i understand, the optimizer could update the parameters after i changed the requires_grad=False.
When i check the gradient is “after” calling optimizer.step().
As… | 2,074 | {'text': ['Yes, you can move the mean by adding the mean to the output of the normal variable.\n\nBut, a maybe better way of doing it is to use the normal_ function as follows:\n\ndef gaussian(ins, is_training, mean, stddev):\n\nif is_training:\n\nnoise = Variable(ins.data.new(ins.size()).normal_(mean, stdde…'], 'answer_start': [2074]} |
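A modern equivalent of the noise layer above: torch.randn_like stays on the input's device and dtype, avoiding the numpy round-trip from the question (a sketch, not the thread's exact code):

```python
import torch
import torch.nn as nn

class GaussianNoise(nn.Module):
    def __init__(self, mean=0.0, stddev=0.01):
        super().__init__()
        self.mean, self.stddev = mean, stddev

    def forward(self, x):
        if self.training:  # add noise only during training, as in the answer
            return x + torch.randn_like(x) * self.stddev + self.mean
        return x

out = GaussianNoise(stddev=0.1)(torch.zeros(2, 3))
```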
How to efficiently normalize a batch of tensor to [0, 1] | Hi,
I have a batch of tensors. How can I efficiently normalize it to the range [0, 1]?
For example,
The tensor is A with dimension [batch=25, height=3, width=3]. I can use for-loop to finish this normalization like
# batchwise normalize to [0, 1] along with height and width
for i in range(batc… | 0 | 2019-12-27T07:25:16.379Z | Thanks for the code.
This should work:
AA = AA.view(A.size(0), -1)
AA -= AA.min(1, keepdim=True)[0]
AA /= AA.max(1, keepdim=True)[0]
AA = AA.view(batch_size, height, width) | 8 | 2019-12-28T06:50:13.442Z | https://discuss.pytorch.org/t/how-to-efficiently-normalize-a-batch-of-tensor-to-0-1/65122/6 | Yes, you can move the mean by adding the mean to the output of the normal variable.
But, a maybe better way of doing it is to use the normal_ function as follows:
def gaussian(ins, is_training, mean, stddev):
if is_training:
noise = Variable(ins.data.new(ins.size()).normal_(mean, stdde… Thanks for the code.
This should work:
AA = AA.view(A.size(0), -1)
AA -= AA.min(1, keepdim=True)[0]
AA /= AA.max(1, keepdim=True)[0]
AA = AA.view(batch_size, height, width) [image] KFrank:
First, are you using non-zero momentum or weight_decay ?
First of all, i am using the momentum in optimizer.
So i understand, the optimizer could update the parameters after i changed the requires_grad=False.
When i check the gradient is “after” calling optimizer.step().
As… | 1,336 | {'text': ['Thanks for the code.\n\nThis should work:\n\nAA = AA.view(A.size(0), -1)\n\nAA -= AA.min(1, keepdim=True)[0]\n\nAA /= AA.max(1, keepdim=True)[0]\n\nAA = AA.view(batch_size, height, width)'], 'answer_start': [1336]} |
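The answer's view/min/max trick, assembled into a runnable snippet with the question's batch of 25 maps of size 3x3:

```python
import torch

A = torch.randn(25, 3, 3)
AA = A.view(A.size(0), -1)            # flatten each sample
AA = AA - AA.min(1, keepdim=True)[0]  # per-sample minimum -> 0
AA = AA / AA.max(1, keepdim=True)[0]  # per-sample maximum -> 1
AA = AA.view(25, 3, 3)
print(AA.view(25, -1).min(1)[0], AA.view(25, -1).max(1)[0])  # per-sample 0 and 1
```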
Parameters with requires_grad = False are updated during training | Hello. I’m trying to freeze the front layers during training.
Before starting optimization, the optimizer is constructed by
optimizer = torch.optim.SGD(net.parameters(), lr, ...)
Then, during training, i changed the front layers’ requires_grad=False.
Specifically,
for epoch in range(total_epoch):
… | 4 | 2020-07-22T07:02:42.999Z | [image] KFrank:
First, are you using non-zero momentum or weight_decay ?
First of all, i am using the momentum in optimizer.
So i understand, the optimizer could update the parameters after i changed the requires_grad=False.
When i check the gradient is “after” calling optimizer.step().
As… | 0 | 2020-07-23T00:40:07.964Z | https://discuss.pytorch.org/t/parameters-with-requires-grad-false-are-updated-during-training/90096/5 | Yes, you can move the mean by adding the mean to the output of the normal variable.
But, a maybe better way of doing it is to use the normal_ function as follows:
def gaussian(ins, is_training, mean, stddev):
if is_training:
noise = Variable(ins.data.new(ins.size()).normal_(mean, stdde… Thanks for the code.
This should work:
AA = AA.view(A.size(0), -1)
AA -= AA.min(1, keepdim=True)[0]
AA /= AA.max(1, keepdim=True)[0]
AA = AA.view(batch_size, height, width) [image] KFrank:
First, are you using non-zero momentum or weight_decay ?
First of all, I am using momentum in the optimizer.
So I understand the optimizer could update the parameters after I changed requires_grad=False.
The point when I check the gradient is “after” calling optimizer.step().
As… | 477 | {'text': ['[image] KFrank:\n\nFirst, are you using non-zero momentum or weight_decay ?\n\nFirst of all, i am using the momentum in optimizer.\n\nSo i understand, the optimizer could update the parameters after i changed the requires_grad=False.\n\nWhen i check the gradient is “after” calling optimizer.step().\n\nAs…'], 'answer_start': [477]} |
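A small demonstration of the effect discussed here: with momentum, SGD keeps moving a parameter even after requires_grad is set to False, because the stale .grad and the momentum buffer are still applied. Clearing the gradient (or rebuilding the optimizer) is one remedy:

```python
import torch
import torch.nn as nn

layer = nn.Linear(2, 2)
opt = torch.optim.SGD(layer.parameters(), lr=0.1, momentum=0.9)

layer(torch.randn(4, 2)).sum().backward()
opt.step()                                # populates grads and momentum buffers

layer.weight.requires_grad = False        # "freeze"
before = layer.weight.clone()
opt.step()                                # still moves: stale .grad + momentum
print(torch.equal(before, layer.weight))  # False

layer.weight.grad = None                  # remedy: SGD now skips this parameter
```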
ReduceLROnPlateau not doing anything? | I’m trying to use the ReduceLROnPlateau scheduler but it doesn’t do anything, i.e. not decrease the learning rate after my loss stops decreasing (and actually starts to increase over multiple epochs quite a bit).
Here is the code:
criterion = nn.MSELoss()
optimizer = optim.Adam(sel… | 2 | 2018-09-05T21:25:11.580Z | The patience is applied to the last minimal loss value and the subsequent values.
Let’s analyze the behavior for patience=0:
Until epoch10 the loss is decreasing (starting with epoch0).
The loss in epoch11 increases; since patience=0, we are decreasing the lr. The current min value is 10980 from … | 14 | 2018-09-05T22:44:25.983Z | https://discuss.pytorch.org/t/reducelronplateau-not-doing-anything/24575/10 | The patience is applied to the last minimal loss value and the subsequent values.
Let’s analyze the behavior for patience=0:
Until epoch10 the loss is decreasing (starting with epoch0).
The loss in epoch11 increases; since patience=0, we are decreasing the lr. The current min value is 10980 from … You could try to see the memory usage with the script posted in <a href="https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277/2?u=ptrblck">this thread</a>.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? This was solved in PyTorch 1.10.0:
“same” keyword is accepted as input for padding for conv2d | 1,560 | {'text': ['The patience is applied to the last minimal loss value and the subsequent values.\n\nLet’s analyze the behavior for patience=0:\n\nUntil epoch10 the loss is decreasing (starting with epoch0).\n\nThe loss in epoch11 increases; since patience=0, we are decreasing the lr. The current min value is 10980 from …'], 'answer_start': [1560]} |
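A minimal usage sketch matching the patience behavior explained above (loss values are illustrative):

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=2)

for epoch, val_loss in enumerate([10.0, 9.0, 9.5, 9.6, 9.7]):
    scheduler.step(val_loss)  # lr drops once patience epochs pass without a new minimum
    print(epoch, optimizer.param_groups[0]['lr'])
```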
How to free GPU memory? (and delete memory allocated variables) | I am using a VGG16 pretrained network, and the GPU memory usage (seen via nvidia-smi) increases every mini-batch (even when I delete all variables, or use torch.cuda.empty_cache() in the end of every iteration). It seems like some variables are stored in the GPU memory and cause the “out of memory” … | 0 | 2018-07-08T09:08:21.276Z | You could try to see the memory usage with the script posted in <a href="https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277/2?u=ptrblck">this thread</a>.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? | 1 | 2018-07-09T11:12:03.048Z | https://discuss.pytorch.org/t/how-to-free-gpu-memory-and-delete-memory-allocated-variables/20856/14 | The patience is applied to the last minimal loss value and the subsequent values.
Let’s analyze the behavior for patience=0:
Until epoch10 the loss is decreasing (starting with epoch0).
The loss in epoch11 increases; since patience=0, we are decreasing the lr. The current min value is 10980 from … You could try to see the memory usage with the script posted in <a href="https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277/2?u=ptrblck">this thread</a>.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? This was solved in PyTorch 1.10.0:
“same” keyword is accepted as input for padding for conv2d | 1,089 | {'text': ['You could try to see the memory usage with the script posted in <a href="https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277/2?u=ptrblck">this thread</a>.\n\nDo you still run out of memory for batch_size=1 or are you currently testing batch_size=4?\n\nCould you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD?'], 'answer_start': [1089]} |
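A few lines for inspecting and releasing cached GPU memory along the lines of the linked script (requires a CUDA device):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device='cuda')
    print(torch.cuda.memory_allocated())  # bytes held by live tensors
    del x                                  # drop the last reference
    torch.cuda.empty_cache()               # return cached blocks to the driver
    print(torch.cuda.memory_allocated())   # back to (near) zero
```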
Same padding equivalent in Pytorch | I have a layer with an input of
torch.Size([64, 32, 100, 20])
In Keras I was using this
conv_first1 = Conv2D(32, (4, 1), padding="same")(conv_first1)
which lead to an output shape the same as an the input shape
If I use the below in pytorch I end up with a shape of 64,32,99,20
self.conv2 = nn.… | 1 | 2020-06-12T00:20:59.905Z | This was solved in PyTorch 1.10.0:
“same” keyword is accepted as input for padding for conv2d | 6 | 2021-10-22T11:25:36.958Z | https://discuss.pytorch.org/t/same-padding-equivalent-in-pytorch/85121/8 | The patience is applied to the last minimal loss value and the subsequent values.
Let’s analyze the behavior for patience=0:
Until epoch10 the loss is decreasing (starting with epoch0).
The loss in epoch11 increases; since patience=0, we are decreasing the lr. The current min value is 10980 from … You could try to see the memory usage with the script posted in <a href="https://discuss.pytorch.org/t/how-pytorch-releases-variable-garbage/7277/2?u=ptrblck">this thread</a>.
Do you still run out of memory for batch_size=1 or are you currently testing batch_size=4?
Could you temporarily switch to an optimizer without tracking stats, e.g. optim.SGD? This was solved in PyTorch 1.10.0:
“same” keyword is accepted as input for padding for conv2d | 663 | {'text': ['this was solved since Pytorch 1.10.0\n\n“same” keyword is accepted as input for padding for conv2d'], 'answer_start': [663]} |
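With PyTorch >= 1.10 the Keras-style behavior from the question is direct; for the (4, 1) kernel above:

```python
import torch
import torch.nn as nn

x = torch.randn(64, 32, 100, 20)
conv = nn.Conv2d(32, 32, kernel_size=(4, 1), padding="same")
print(conv(x).shape)  # torch.Size([64, 32, 100, 20]) -- spatial size preserved
```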
How to Concatenate layers in PyTorch similar to tf.keras.layers.Concatenate | I’m trying to implement the following network in pytorch. I’m not sure if the method I used to combine layers is correct. In given network instead of convnet I’ve used pretrained VGG16 model.
<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/9e3b57848dd1067f412d26f0e452fe45c10a99a4.png" data-download-href="https://discuss.pytorch.org/uploads/default/9e3b57848dd1067f412d26f0e452fe45c10a99a4" title="deeprank.PNG">[deeprank]</a>
model = models.vgg16(pretrained=True)
new_classifier = nn.Sequential(*list(model.classifier.… | 2 | 2019-01-04T03:02:28.096Z | Thanks for the code.
It looks like to padding of your second max pooling layer is wrong, since you are using the same argument in Keras.
Try this definition self.maxpool2 = nn.MaxPool2d(7,2,padding=3) and your output will be [batch_size, 96, 4, 4] for both branches. | 4 | 2019-01-10T18:12:01.535Z | https://discuss.pytorch.org/t/how-to-concatenate-layers-in-pytorch-similar-to-tf-keras-layers-concatenate/33736/4 | Thanks for the code.
It looks like to padding of your second max pooling layer is wrong, since you are using the same argument in Keras.
Try this definition self.maxpool2 = nn.MaxPool2d(7,2,padding=3) and your output will be [batch_size, 96, 4, 4] for both branches. I implemented NN, KNN and KMeans on a project I am working on only using PyTorch. You can find the implementation here with an example: <a href="https://gist.github.com/JosueCom/7e89afc7f30761022d7747a501260fe3" class="inline-onebox" rel="noopener nofollow ugc">Nearest Neighbor, K Nearest Neighbor and K Means (NN, KNN, KMeans) only using PyTorch · GitHub</a>
>>> import torch as th
>>> from clustering import KNN
>>> data = th.… The binaries ship with their own CUDA, cudnn, etc. so that you don’t need to install these libs locally, if you are fine with the provided versions.
Could you uninstall PyTorch in your conda environment and reinstall it (with cudatoolkit=10.1)?
If you want to use e.g. CUDA10.2, you would have to i… | 1,518 | {'text': ['Thanks for the code.\n\nIt looks like to padding of your second max pooling layer is wrong, since you are using the same argument in Keras.\n\nTry this definition self.maxpool2 = nn.MaxPool2d(7,2,padding=3) and your output will be [batch_size, 96, 4, 4] for both branches.'], 'answer_start': [1518]} |
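The usual PyTorch counterpart of tf.keras.layers.Concatenate is torch.cat along the channel dimension; a sketch with two branches of matching spatial size (a simplified stand-in for the VGG16 setup in the question):

```python
import torch
import torch.nn as nn

class TwoBranch(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = nn.Conv2d(3, 16, 3, padding=1)
        self.branch_b = nn.Conv2d(3, 16, 3, padding=1)
        self.head = nn.Conv2d(32, 8, 1)  # 16 + 16 concatenated channels

    def forward(self, x):
        a, b = self.branch_a(x), self.branch_b(x)
        return self.head(torch.cat([a, b], dim=1))

print(TwoBranch()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 8, 64, 64])
```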
K nearest neighbor in pytorch | Hi, I have tensor size [12936x4098] and after computing a similarity using F.cosine_similarity, get a tensor of size 12936. For a given point, how can I get the k-nearest neighbor?
Using clustering methods defined in sklearn or scipy is very slow and required copy tensor from GPU to CPU.
Thank you … | 1 | 2019-10-31T16:06:52.016Z | I implemented NN, KNN and KMeans on a project I am working on only using PyTorch. You can find the implementation here with an example: <a href="https://gist.github.com/JosueCom/7e89afc7f30761022d7747a501260fe3" class="inline-onebox" rel="noopener nofollow ugc">Nearest Neighbor, K Nearest Neighbor and K Means (NN, KNN, KMeans) only using PyTorch · GitHub</a>
>>> import torch as th
>>> from clustering import KNN
>>> data = th.… | 1 | 2021-07-04T03:53:53.044Z | https://discuss.pytorch.org/t/k-nearest-neighbor-in-pytorch/59695/11 | Thanks for the code.
It looks like to padding of your second max pooling layer is wrong, since you are using the same argument in Keras.
Try this definition self.maxpool2 = nn.MaxPool2d(7,2,padding=3) and your output will be [batch_size, 96, 4, 4] for both branches. I implemented NN, KNN and KMeans on a project I am working on only using PyTorch. You can find the implementation here with an example: <a href="https://gist.github.com/JosueCom/7e89afc7f30761022d7747a501260fe3" class="inline-onebox" rel="noopener nofollow ugc">Nearest Neighbor, K Nearest Neighbor and K Means (NN, KNN, KMeans) only using PyTorch · GitHub</a>
>>> import torch as th
>>> from clustering import KNN
>>> data = th.… The binaries ship with their own CUDA, cudnn, etc. so that you don’t need to install these libs locally, if you are fine with the provided versions.
Could you uninstall PyTorch in your conda environment and reinstall it (with cudatoolkit=10.1)?
If you want to use e.g. CUDA10.2, you would have to i… | 1,028 | {'text': ['I implemented NN, KNN and KMeans on a project I am working on only using PyTorch. You can find the implementation here with an example: <a href="https://gist.github.com/JosueCom/7e89afc7f30761022d7747a501260fe3" class="inline-onebox" rel="noopener nofollow ugc">Nearest Neighbor, K Nearest Neighbor and K Means (NN, KNN, KMeans) only using PyTorch · GitHub</a>\n\n>>> import torch as th\n\n>>> from clustering import KNN\n\n>>> data = th.…'], 'answer_start': [1028]} |
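A self-contained k-NN query on the GPU-friendly path asked about here, using torch.cdist plus torch.topk (shapes mirror the question; random data stands in for the real features):

```python
import torch

database = torch.randn(12936, 4098)
query = torch.randn(1, 4098)

dist = torch.cdist(query, database)                # 1 x 12936 pairwise L2 distances
knn_dist, knn_idx = dist.topk(k=5, largest=False)  # the 5 nearest neighbors
print(knn_idx)
```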
Pytorch for cuda 10.2 | Hi all,
this is my first post and I am new to AI and RL… my apologies in advance.
I use ubuntu 18.04.1
I followed the installation guide from <a href="https://pytorch.org/get-started/locally/#linux-verification" rel="nofollow noopener">pytorch</a> and the CUDA installation guide from <a href="https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1604&target_type=deblocal" rel="nofollow noopener">nvidia</a>
I notice that I cannot choose CUDA 10.2, but the nvidia download site only offers CUDA 10.2.
And after following abov… | 1 | 2020-01-01T16:13:15.954Z | The binaries ship with their own CUDA, cudnn, etc. so that you don’t need to install these libs locally, if you are fine with the provided versions.
Could you uninstall PyTorch in your conda environment and reinstall it (with cudatoolkit=10.1)?
If you want to use e.g. CUDA10.2, you would have to i… | 5 | 2020-01-01T19:32:25.490Z | https://discuss.pytorch.org/t/pytorch-for-cuda-10-2/65524/2 | Thanks for the code.
It looks like to padding of your second max pooling layer is wrong, since you are using the same argument in Keras.
Try this definition self.maxpool2 = nn.MaxPool2d(7,2,padding=3) and your output will be [batch_size, 96, 4, 4] for both branches. I implemented NN, KNN and KMeans on a project I am working on only using PyTorch. You can find the implementation here with an example: <a href="https://gist.github.com/JosueCom/7e89afc7f30761022d7747a501260fe3" class="inline-onebox" rel="noopener nofollow ugc">Nearest Neighbor, K Nearest Neighbor and K Means (NN, KNN, KMeans) only using PyTorch · GitHub</a>
>>> import torch as th
>>> from clustering import KNN
>>> data = th.… The binaries ship with their own CUDA, cudnn, etc. so that you don’t need to install these libs locally, if you are fine with the provided versions.
Could you uninstall PyTorch in your conda environment and reinstall it (with cudatoolkit=10.1)?
If you want to use e.g. CUDA10.2, you would have to i… | 737 | {'text': ['The binaries ship with their own CUDA, cudnn, etc. so that you don’t need to install these libs locally, if you are fine with the provided versions.\n\nCould you uninstall PyTorch in your conda environment and reinstall it (with cudatoolkit=10.1)?\n\nIf you want to use e.g. CUDA10.2, you would have to i…'], 'answer_start': [737]} |
Linear layer input neurons number calculation after conv2d | I will be really thankful to those who can explain this to me. I know the formula for the calculation, but after some iterations I haven’t arrived at the answer yet.
The formula for output neuron:
Output = ((I-K+2P)/S + 1), where
I - a size of input neuron,
K - kernel size,
P - padding,
S - stride.
Inpu… | 1 | 2018-11-03T12:21:38.185Z | Your input shape seems to be a bit wrong, as it looks like the channels are in the last dimension.
In PyTorch, image data is expected to have the shape [batch_size, channel, height, width].
Based on your shape, I guess 36 is the batch_size, while 3 seems to be the number channels.
However, as you… | 11 | 2018-11-03T13:16:32.543Z | https://discuss.pytorch.org/t/linear-layer-input-neurons-number-calculation-after-conv2d/28659/2 | Your input shape seems to be a bit wrong, as it looks like the channels are in the last dimension.
In PyTorch, image data is expected to have the shape [batch_size, channel, height, width].
Based on your shape, I guess 36 is the batch_size, while 3 seems to be the number channels.
However, as you… As the warning explains, you should call scheduler.step() after optimizer.step() was called (starting with PyTorch >= 1.1.0).
In your current code you are calling scheduler.step() directly in the first lines of the train_one_epoch method. Move it after the optimizer.step() method and the warning s… The code for the self-attention layer :
import torch.nn as nn
class SelfAttention(nn.Module):
""" Self attention Layer"""
def __init__(self,in_dim,activation):
super(SelfAttention,self).__init__()
self.chanel_in = in_dim
self.activation = activation
… | 2,090 | {'text': ['Your input shape seems to be a bit wrong, as it looks like the channels are in the last dimension.\n\nIn PyTorch, image data is expected to have the shape [batch_size, channel, height, width].\n\nBased on your shape, I guess 36 is the batch_size, while 3 seems to be the number channels.\n\nHowever, as you…'], 'answer_start': [2090]} |
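A worked instance of the formula from the question, Output = (I - K + 2P)/S + 1, checked against an actual layer:

```python
import torch
import torch.nn as nn

I, K, P, S = 32, 5, 0, 1
print((I - K + 2 * P) // S + 1)  # (32 - 5 + 0)/1 + 1 = 28

conv = nn.Conv2d(3, 16, kernel_size=K, padding=P, stride=S)
print(conv(torch.randn(1, 3, I, I)).shape)  # torch.Size([1, 16, 28, 28])
# so the following nn.Linear would take 16 * 28 * 28 input features
```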
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()' | Hi I got this error can anyone help me please?
UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping … | 2 | 2020-07-07T21:57:49.718Z | As the warning explains, you should call scheduler.step() after optimizer.step() was called (starting with PyTorch >= 1.1.0).
In your current code you are calling scheduler.step() directly in the first lines of the train_one_epoch method. Move it after the optimizer.step() method and the warning s… | 4 | 2020-07-09T02:10:59.512Z | https://discuss.pytorch.org/t/userwarning-detected-call-of-lr-scheduler-step-before-optimizer-step-in-pytorch-1-1-0-and-later-you-should-call-them-in-the-opposite-order-optimizer-step-before-lr-scheduler-step/88295/2 | Your input shape seems to be a bit wrong, as it looks like the channels are in the last dimension.
In PyTorch, image data is expected to have the shape [batch_size, channel, height, width].
Based on your shape, I guess 36 is the batch_size, while 3 seems to be the number channels.
However, as you… As the warning explains, you should call scheduler.step() after optimizer.step() was called (starting with PyTorch >= 1.1.0).
In your current code you are calling scheduler.step() directly in the first lines of the train_one_epoch method. Move it after the optimizer.step() method and the warning s… The code for the self-attention layer :
import torch.nn as nn
class SelfAttention(nn.Module):
""" Self attention Layer"""
def __init__(self,in_dim,activation):
super(SelfAttention,self).__init__()
self.chanel_in = in_dim
self.activation = activation
… | 1,354 | {'text': ['As the warning explains, you should call sdcheduler.step() after optimizer.step() was called (starting with PyTorch >= 1.1.0).\n\nIn your current code you are calling scheduler.step() directly in the first lines of the train_one_epoch method. Move it after the optimizer.step() method and the warning s…'], 'answer_start': [1354]} |
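The call order the warning asks for, in a skeletal loop (a per-epoch scheduler such as StepLR is assumed):

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

for epoch in range(3):
    for _ in range(5):  # stand-in for the batches of an epoch
        optimizer.zero_grad()
        loss = model(torch.randn(4, 10)).sum()
        loss.backward()
        optimizer.step()   # first: update the parameters
    scheduler.step()       # then: update the learning rate, once per epoch
```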
Attention in image classification | Hi all,
I recently started reading up on attention in the context of computer vision. In my research, I found a number of ways attention is applied for various CV tasks. However, it is still unclear to me as to what’s really happening.
When I say attention, I mean a mechanism that will focus on th… | 3 | 2020-05-07T09:54:43.979Z | The code for the self-attention layer :
import torch.nn as nn
class SelfAttention(nn.Module):
""" Self attention Layer"""
def __init__(self,in_dim,activation):
super(SelfAttention,self).__init__()
self.chanel_in = in_dim
self.activation = activation
… | 8 | 2020-05-07T12:26:06.707Z | https://discuss.pytorch.org/t/attention-in-image-classification/80147/3 | Your input shape seems to be a bit wrong, as it looks like the channels are in the last dimension.
In PyTorch, image data is expected to have the shape [batch_size, channel, height, width].
Based on your shape, I guess 36 is the batch_size, while 3 seems to be the number channels.
However, as you… As the warning explains, you should call sdcheduler.step() after optimizer.step() was called (starting with PyTorch >= 1.1.0).
In your current code you are calling scheduler.step() directly in the first lines of the train_one_epoch method. Move it after the optimizer.step() method and the warning s… The code for the self-attention layer :
import torch.nn as nn
class SelfAttention(nn.Module):
""" Self attention Layer"""
def __init__(self,in_dim,activation):
super(SelfAttention,self).__init__()
self.chanel_in = in_dim
self.activation = activation
… | 621 | {'text': ['The code for the self-attention layer :\n\nimport torch.nn as nn\n\nclass SelfAttention(nn.Module):\n\n""" Self attention Layer"""\n\ndef __init__(self,in_dim,activation):\n\nsuper(SelfAttention,self).__init__()\n\nself.chanel_in = in_dim\n\nself.activation = activation\n\n…'], 'answer_start': [621]} |
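A condensed, runnable version of the SAGAN-style layer quoted above (the activation argument is dropped for brevity; a sketch, not the thread's full code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.query = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.key = nn.Conv2d(in_dim, in_dim // 8, 1)
        self.value = nn.Conv2d(in_dim, in_dim, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # starts as an identity mapping

    def forward(self, x):
        b, c, h, w = x.size()
        q = self.query(x).view(b, -1, h * w).permute(0, 2, 1)  # B x N x C'
        k = self.key(x).view(b, -1, h * w)                     # B x C' x N
        attn = F.softmax(torch.bmm(q, k), dim=-1)              # B x N x N
        v = self.value(x).view(b, -1, h * w)                   # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x

print(SelfAttention2d(64)(torch.randn(2, 64, 16, 16)).shape)
```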
Extracting and using features from a pretrained model | I see a related topic regarding my question <a href="https://discuss.pytorch.org/t/how-to-extract-features-of-an-image-from-a-trained-model/119">here</a>, but I could not find my answer there, so I ask it here.
Let’s say I’m using the pretrained VGG and I want to extract the features from some specific layers.
Here is what I should do:
# Load the Vgg:
vgg16 = models.vgg16(pretrained=True)
# cut the par… | 1 | 2018-07-05T01:56:15.733Z | You can still using the pretrained weights, here’s some code:
import torch
import torch.utils.model_zoo as model_zoo
from torchvision.models.vgg import VGG, make_layers, cfg, vgg16
class MyVgg(VGG):
def __init__(self):
super().__init__(make_layers(cfg['D']))
def forward(self, x):… | 4 | 2018-07-05T07:14:38.542Z | https://discuss.pytorch.org/t/extracting-and-using-features-from-a-pretrained-model/20723/5 | You can still using the pretrained weights, here’s some code:
import torch
import torch.utils.model_zoo as model_zoo
from torchvision.models.vgg import VGG, make_layers, cfg, vgg16
class MyVgg(VGG):
def __init__(self):
super().__init__(make_layers(cfg['D']))
def forward(self, x):… This should work for what you want, all wrapped up as the activation function ‘non_sat_relu’. The backward leak is set to 0.1, but it could of course be whatever you want.
import torch
class NSReLU(torch.autograd.Function):
@staticmethod
def forward(self,x):
self.neg = x < 0
… I used torch::tensor(ArrayRef<float>) successfully.
I’m not 100% certain whether that copies already. You might have to take care of ownership or clone the output while the array ref is still alive, I think you have to do that when you use form_blob, too.
Best regards
Thomas | 1,834 | {'text': ['You can still using the pretrained weights, here’s some code:\n\nimport torch\n\nimport torch.utils.model_zoo as model_zoo\n\nfrom torchvision.models.vgg import VGG, make_layers, cfg, vgg16\n\nclass MyVgg(VGG):\n\ndef __init__(self):\n\nsuper().__init__(make_layers(cfg['D']))\n\ndef forward(self, x):…'], 'answer_start': [1834]} |
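An alternative to subclassing VGG for the feature extraction discussed here: forward hooks, which leave the pretrained model untouched. The two layer indices below are illustrative picks (pooling stages of torchvision's vgg16):

```python
import torch
from torchvision import models

vgg16 = models.vgg16(pretrained=True).eval()
features = {}

def save_to(name):
    def hook(module, inp, out):
        features[name] = out.detach()  # stash the activation by name
    return hook

vgg16.features[4].register_forward_hook(save_to('pool1'))
vgg16.features[16].register_forward_hook(save_to('pool3'))

with torch.no_grad():
    vgg16(torch.randn(1, 3, 224, 224))
print({k: v.shape for k, v in features.items()})
```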
Relu with leaky derivative | My understanding is that for classification tasks there is the intuition that:
(1) relu activation functions encourage sparsity, which is good (for generalization?) but that
(2) a leaky relu solves the gradient saturation problem, which relu has, at the cost of sparsity.
Is it possible, in PyTorc… | 3 | 2018-12-22T14:14:26.551Z | This should work for what you want, all wrapped up as the activation function ‘non_sat_relu’. The backward leak is set to 0.1, but it could of course be whatever you want.
import torch
class NSReLU(torch.autograd.Function):
@staticmethod
def forward(self,x):
self.neg = x < 0
… | 3 | 2019-09-28T13:27:10.042Z | https://discuss.pytorch.org/t/relu-with-leaky-derivative/32818/5 | You can still using the pretrained weights, here’s some code:
import torch
import torch.utils.model_zoo as model_zoo
from torchvision.models.vgg import VGG, make_layers, cfg, vgg16
class MyVgg(VGG):
def __init__(self):
super().__init__(make_layers(cfg['D']))
def forward(self, x):… This should work for what you want, all wrapped up as the activation function ‘non_sat_relu’. The backward leak is set to 0.1, but it could of course be whatever you want.
import torch
class NSReLU(torch.autograd.Function):
@staticmethod
def forward(self,x):
self.neg = x < 0
… I used torch::tensor(ArrayRef<float>) successfully.
I’m not 100% certain whether that copies already. You might have to take care of ownership or clone the output while the array ref is still alive, I think you have to do that when you use form_blob, too.
Best regards
Thomas | 1,221 | {'text': ['This should work for what you want, all wrapped up as the activation function ‘non_sat_relu’. The backward leak is set to 0.1, but it could of course be whatever you want.\n\nimport torch\n\nclass NSReLU(torch.autograd.Function):\n\n@staticmethod\n\ndef forward(self,x):\n\nself.neg = x < 0\n\n…'], 'answer_start': [1221]} |
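A complete version of the idea in this thread, written for current PyTorch (ctx instead of self): ordinary ReLU forward, leaky gradient (0.1) backward:

```python
import torch

class NSReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x < 0)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        neg, = ctx.saved_tensors
        grad = grad_output.clone()
        grad[neg] = grad[neg] * 0.1  # leak only in the backward pass
        return grad

def non_sat_relu(x):
    return NSReLU.apply(x)

x = torch.randn(5, requires_grad=True)
non_sat_relu(x).sum().backward()
print(x.grad)  # 1.0 where x > 0, 0.1 where x < 0
```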
Can I initialize tensor from std::vector in libtorch? | Hi,
I wonder if I can initialize torch::Tensor from std::vector like this
#include <torch/torch.h>
#include <vector>
int main()
{
std::vector<T> initializer;
...
torch::Tensor tensor = torch::from_blob(initializer);
} | 3 | 2018-12-28T06:23:30.127Z | I used torch::tensor(ArrayRef<float>) successfully.
I’m not 100% certain whether that copies already. You might have to take care of ownership or clone the output while the array ref is still alive, I think you have to do that when you use form_blob, too.
Best regards
Thomas | 3 | 2018-12-28T13:24:37.311Z | https://discuss.pytorch.org/t/can-i-initialize-tensor-from-std-vector-in-libtorch/33236/2 | You can still using the pretrained weights, here’s some code:
import torch
import torch.utils.model_zoo as model_zoo
from torchvision.models.vgg import VGG, make_layers, cfg, vgg16
class MyVgg(VGG):
def __init__(self):
super().__init__(make_layers(cfg['D']))
def forward(self, x):… This should work for what you want, all wrapped up as the activation function ‘non_sat_relu’. The backward leak is set to 0.1, but it could of course be whatever you want.
import torch
class NSReLU(torch.autograd.Function):
@staticmethod
def forward(self,x):
self.neg = x < 0
… I used torch::tensor(ArrayRef<float>) successfully.
I’m not 100% certain whether that copies already. You might have to take care of ownership or clone the output while the array ref is still alive, I think you have to do that when you use form_blob, too.
Best regards
Thomas | 598 | {'text': ['I used torch::tensor(ArrayRef<float>) successfully.\n\nI’m not 100% certain whether that copies already. You might have to take care of ownership or clone the output while the array ref is still alive, I think you have to do that when you use form_blob, too.\n\nBest regards\n\nThomas'], 'answer_start': [598]} |
Confused about torch.max() and gradient | x = Variable(torch.randn(1,3),requires_grad=True)
z,_ = torch.max(x,1)
z.backward()
print(x.grad)
Variable containing:
1 0 0
[torch.FloatTensor of size 1x3]
I understand the max operation is a not differentiable operation. So why can I still get the gradient here? | 1 | 2018-03-03T09:58:21.273Z | max simply selects the greatest value and ignores the others, so max is the identity operation for that one element. Therefore the gradient can flow backwards through it for just that one element. | 9 | 2018-03-03T10:03:35.073Z | https://discuss.pytorch.org/t/confused-about-torch-max-and-gradient/14283/2 | max simply selects the greatest value and ignores the others, so max is the identity operation for that one element. Therefore the gradient can flow backwards through it for just that one element. For the second use case you could use Tensor.unfold:
S = 128 # channel dim
W = 256 # width
H = 256 # height
batch_size = 10
x = torch.randn(batch_size, S, W, H)
size = 64 # patch size
stride = 64 # patch stride
patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)
pr… I think a good solution can be found here: <a href="https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7" class="inline-onebox">Changing transforms after creating a dataset - #7 by Brando_Miranda</a>
train_dataset = MyDataset(train_transform)
val_dataset = MyDataset(val_transform)
train_indices, val_indices = sklearn.model_selection.train_test_split(indices)
train_dataset = torch.utils… | 1,764 | {'text': ['max simply selects the greatest value and ignores the others, so max is the identity operation for that one element. Therefore the gradient can flow backwards through it for just that one element.'], 'answer_start': [1764]} |
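A quick demonstration of the subgradient behavior described above: only the winning element receives gradient:

```python
import torch

x = torch.tensor([[1.0, 5.0, 3.0]], requires_grad=True)
z, _ = torch.max(x, 1)
z.backward()
print(x.grad)  # tensor([[0., 1., 0.]]) -- identity path through the argmax only
```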
How to extract smaller image patches (3D)? | Best way to extract smaller image patches(3D)?
First step, I would like to read 10 three-dimentional data with size of (H, W, S) and then downsample these data to (H/2, W/2, S/2).
Second step, I want to design a sliding window to extract patches with size of (64, 64, 64) from the above images.
Ar… | 2 | 2018-04-23T14:34:31.294Z | For the second use case you could use Tensor.unfold:
S = 128 # channel dim
W = 256 # width
H = 256 # height
batch_size = 10
x = torch.randn(batch_size, S, W, H)
size = 64 # patch size
stride = 64 # patch stride
patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)
pr… | 13 | 2018-04-24T09:05:18.608Z | https://discuss.pytorch.org/t/how-to-extract-smaller-image-patches-3d/16837/4 | max simply selects the greatest value and ignores the others, so max is the identity operation for that one element. Therefore the gradient can flow backwards through it for just that one element. For the second use case you could use Tensor.unfold:
S = 128 # channel dim
W = 256 # width
H = 256 # height
batch_size = 10
x = torch.randn(batch_size, S, W, H)
size = 64 # patch size
stride = 64 # patch stride
patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)
pr… I think a good solution can be found here: <a href="https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7" class="inline-onebox">Changing transforms after creating a dataset - #7 by Brando_Miranda</a>
train_dataset = MyDataset(train_transform)
val_dataset = MyDataset(val_transform)
train_indices, val_indices = sklearn.model_selection.train_test_split(indices)
train_dataset = torch.utils… | 1,079 | {'text': ['For the second use case you could use Tensor.unfold:\n\nS = 128 # channel dim\n\nW = 256 # width\n\nH = 256 # height\n\nbatch_size = 10\n\nx = torch.randn(batch_size, S, W, H)\n\nsize = 64 # patch size\n\nstride = 64 # patch stride\n\npatches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)\n\npr…'], 'answer_start': [1079]} |
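Continuing the unfold answer above, the patch dimensions can be folded into a single batch axis for per-patch processing (sizes reduced here so the example runs light):

```python
import torch

x = torch.randn(2, 128, 128, 128)  # smaller than the thread's 10 x 128 x 256 x 256
size, stride = 64, 64
patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)
print(patches.shape)  # torch.Size([2, 2, 2, 2, 64, 64, 64])
patches = patches.contiguous().view(-1, size, size, size)
print(patches.shape)  # torch.Size([16, 64, 64, 64])
```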
Apply different Transform (Data Augmentation) to Train and Validation | My dataset folder is prepared as Train Folder and Test Folder. When I conduct experiments, I further split my Train Folder data into Train and Validation.
However, the transform is applied before my split, so it is the same for both my Train and Validation sets. My question is how to apply a different tr…
train_dataset = MyDataset(train_transform)
val_dataset = MyDataset(val_transform)
train_indices, val_indices = sklearn.model_selection.train_test_split(indices)
train_dataset = torch.utils… | 0 | 2021-12-17T21:51:44.733Z | https://discuss.pytorch.org/t/apply-different-transform-data-augmentation-to-train-and-validation/63580/12 | max simply selects the greatest value and ignores the others, so max is the identity operation for that one element. Therefore the gradient can flow backwards through it for just that one element. For the second use case you could use Tensor.unfold:
S = 128 # channel dim
W = 256 # width
H = 256 # height
batch_size = 10
x = torch.randn(batch_size, S, W, H)
size = 64 # patch size
stride = 64 # patch stride
patches = x.unfold(1, size, stride).unfold(2, size, stride).unfold(3, size, stride)
pr… I think a good solution can be found here: <a href="https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7" class="inline-onebox">Changing transforms after creating a dataset - #7 by Brando_Miranda</a>
train_dataset = MyDataset(train_transform)
val_dataset = MyDataset(val_transform)
train_indices, val_indices = sklearn.model_selection.train_test_split(indices)
train_dataset = torch.utils… | 512 | {'text': ['I think a good solution can be found here: <a href="https://discuss.pytorch.org/t/changing-transforms-after-creating-a-dataset/64929/7" class="inline-onebox">Changing transforms after creating a dataset - #7 by Brando_Miranda</a>\n\ntrain_dataset = MyDataset(train_transform)\n\nval_dataset = MyDataset(val_transform)\n\ntrain_indices, val_indices = sklearn.model_selection.train_test_split(indices)\n\ntrain_dataset = torch.utils…'], 'answer_start': [512]} |
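A sketch of the linked approach: split the indices first, then wrap each split so it applies its own transform. FakeData stands in for the real untransformed dataset:

```python
import torch
from torch.utils.data import Dataset, Subset
from torchvision import transforms
from torchvision.datasets import FakeData

class TransformedSubset(Dataset):
    # Applies its own transform on top of a subset of a shared base dataset.
    def __init__(self, dataset, indices, transform):
        self.subset = Subset(dataset, indices)
        self.transform = transform

    def __len__(self):
        return len(self.subset)

    def __getitem__(self, i):
        x, y = self.subset[i]
        return self.transform(x), y

base = FakeData(size=100)  # returns (PIL image, label); stand-in for the real dataset
perm = torch.randperm(len(base)).tolist()
train_ds = TransformedSubset(base, perm[:80], transforms.Compose(
    [transforms.RandomHorizontalFlip(), transforms.ToTensor()]))
val_ds = TransformedSubset(base, perm[80:], transforms.ToTensor())
```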
Reset model weights | I would like to know, if there is a way to reset weights for a PyTorch model.
Here is my code:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=5)
self.conv2 = nn.Conv2d(16, 32, kernel_size=5)
self.c… | 1 | 2018-06-04T15:05:39.913Z | Sure! You just have to define your init function:
def weights_init(m):
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform(m.weight.data)
And call it on the model with:
model.apply(weight_init)
If you want to have the same random weights for each initialization, you would need … | 14 | 2018-06-04T15:18:42.868Z | https://discuss.pytorch.org/t/reset-model-weights/19180/4 | Sure! You just have to define your init function:
def weights_init(m):
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform(m.weight.data)
And call it on the model with:
model.apply(weight_init)
If you want to have the same random weights for each initialization, you would need … Since ImageFolder will lazily load the data in its __getitem__ method, you could create three different dataset instances for training, validation, and test and could pass the appropriate transformation to them.
You could then create the sample indices via torch.arange(nb_samples) (or the numpy equ… Would you like to change the weights manually?
If so, you could wrap the code in a torch.no_grad() guard:
with torch.no_grad():
model.fc.weight[0, 0] = 1.
to prevent Autograd from tracking these changes. | 1,884 | {'text': ['Sure! You just have to define your init function:\n\ndef weights_init(m):\n\nif isinstance(m, nn.Conv2d):\n\ntorch.nn.init.xavier_uniform(m.weight.data)\n\nAnd call it on the model with:\n\nmodel.apply(weight_init)\n\nIf you want to have the same random weights for each initialization, you would need …'], 'answer_start': [1884]} |
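Besides re-initializing specific layer types via apply() as above, a generic reset can reuse each module's own default initialization:

```python
import torch.nn as nn

def reset_all_weights(model):
    for module in model.modules():
        # Most built-in layers (Conv2d, Linear, BatchNorm, ...) define this.
        if hasattr(module, 'reset_parameters'):
            module.reset_parameters()

net = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
reset_all_weights(net)
```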
Using ImageFolder, random_split with multiple transforms | Folks, I downloaded the flower’s dataset (images of 5 classes) which I load with ImageFolder. I then split the entire dataset using torch.utils.data.random_split into a training, validation and a testing set.
The issue I am finding is that I have two different transforms I want to apply. One for … | 3 | 2020-05-05T22:20:56.199Z | Since ImageFolder will lazily load the data in its __getitem__ method, you could create three different dataset instances for training, validation, and test and could pass the appropriate transformation to them.
You could then create the sample indices via torch.arange(nb_samples) (or the numpy equ… | 2 | 2020-05-06T07:00:40.513Z | https://discuss.pytorch.org/t/using-imagefolder-random-split-with-multiple-transforms/79899/2 | Sure! You just have to define your init function:
def weights_init(m):
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform(m.weight.data)
And call it on the model with:
model.apply(weight_init)
If you want to have the same random weights for each initialization, you would need … Since ImageFolder will lazily load the data in its __getitem__ method, you could create three different dataset instances for training, validation, and test and could pass the appropriate transformation to them.
You could then create the sample indices via torch.arange(nb_samples) (or the numpy equ… Would you like to change the weights manually?
If so, you could wrap the code in a torch.no_grad() guard:
with torch.no_grad():
model.fc.weight[0, 0] = 1.
to prevent Autograd from tracking these changes. | 1,241 | {'text': ['Since ImageFolder will lazily load the data in its __getitem__ method, you could create three different dataset instances for training, validation, and test and could pass the appropriate transformation to them.\n\nYou could then create the sample indices via torch.arange(nb_samples) (or the numpy equ…'], 'answer_start': [1241]} |
How to change the weights of a pytorch model? | I need to change the weights at specific layers of ResNet-152 during training.
I think there has been a similar question sometime earlier, but I cannot find it! | 1 | 2019-03-30T19:43:30.522Z | Would you like to change the weights manually?
If so, you could wrap the code in a torch.no_grad() guard:
with torch.no_grad():
model.fc.weight[0, 0] = 1.
to prevent Autograd from tracking these changes. | 6 | 2019-03-30T20:28:00.100Z | https://discuss.pytorch.org/t/how-to-change-the-weights-of-a-pytorch-model/41279/2 | Sure! You just have to define your init function:
def weights_init(m):
if isinstance(m, nn.Conv2d):
torch.nn.init.xavier_uniform(m.weight.data)
And call it on the model with:
model.apply(weight_init)
If you want to have the same random weights for each initialization, you would need … Since ImageFolder will lazily load the data in its __getitem__ method, you could create three different dataset instances for training, validation, and test and could pass the appropriate transformation to them.
You could then create the sample indices via torch.arange(nb_samples) (or the numpy equ… Would you like to change the weights manually?
If so, you could wrap the code in a torch.no_grad() guard:
with torch.no_grad():
model.fc.weight[0, 0] = 1.
to prevent Autograd from tracking these changes. | 608 | {'text': ['Would you like to change the weights manually?\n\nIf so, you could wrap the code in a torch.no_grad() guard:\n\nwith torch.no_grad():\n\nmodel.fc.weight[0, 0] = 1.\n\nto prevent Autograd from tracking these changes.'], 'answer_start': [608]} |
Dataloader stucks | Hi, developers:
I have a large training dataset packed in a zip file.
In train.py, I load it once and then pass it into the dataloader; here is the code:
import zipfile
# load zip dataset
zf = zipfile.ZipFile(zip_path)
# read the images of zip via dataloader
train_loader = torch.utils.data… | 3 | 2018-02-27T10:02:50.310Z | I had the same issue when opening a tarfile. A quick fix is to open a zipfile handle once at the start of __getitem__.
class MyDataSet(Dataset):
def __init__(self, filename):
self.zip_handle = None
self.fname = filename
def __getitem__(self, x):
if self.zip_handle is… | 4 | 2018-03-07T11:59:23.645Z | https://discuss.pytorch.org/t/dataloader-stucks/14087/5 | I had the same issue when opening a tarfile. A quick fix is to open a zipfile handle once at the start of __getitem__.
class MyDataSet(Dataset):
def __init__(self, filename):
self.zip_handle = None
self.fname = filename
def __getitem__(self, x):
if self.zip_handle is… we have that flag set because we build with gcc 4.9.x, which only has the old ABI.
In GCC 5.1, the ABI for std::string was changed, and binaries compiling with gcc >= 5.1 are not ABI-compatible with binaries build with gcc < 5.1 (like pytorch) unless you set that flag. You could use the <a href="https://github.com/pytorch/examples/blob/master/imagenet/main.py#L327" rel="nofollow noopener">ImageNet example</a> or the following manual approach:
for epoch in range(num_epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
running_loss =+ loss.item() * images.size(0)
loss_values.append(running_loss / len(train_dataset))
plt.plot(loss_va… | 1,630 | {'text': ['I had the same issue when opening a tarfile. A quick fix is to open a zipfile handle once at the start of __getitem__.\n\nclass MyDataSet(Dataset):\n\ndef __init__(self, filename):\n\nself.zip_handle = None\n\nself.fname = filename\n\ndef __getitem__(self, x):\n\nif self.zip_handle is…'], 'answer_start': [1630]} |
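A fleshed-out version of the per-worker lazy handle from this answer (PIL decoding and a .png filter are assumptions for illustration):

```python
import io
import zipfile
from PIL import Image
from torch.utils.data import Dataset

class ZipImageDataset(Dataset):
    def __init__(self, zip_path):
        self.zip_path = zip_path
        self.zip_handle = None  # opened lazily, once per worker process
        with zipfile.ZipFile(zip_path) as zf:  # member list read once up front
            self.names = [n for n in zf.namelist() if n.endswith('.png')]

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        if self.zip_handle is None:
            self.zip_handle = zipfile.ZipFile(self.zip_path)
        data = self.zip_handle.read(self.names[idx])
        return Image.open(io.BytesIO(data)).convert('RGB')
```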
Issues linking with libtorch (C++11 ABI?) | I’m using clang 6.0 and I’m getting a lot of issues while linking to c10 library:
undefined reference to `c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)'
All undefined references seem to be relat… | 2 | 2018-11-13T16:09:41.271Z | we have that flag set because we build with gcc 4.9.x, which only has the old ABI.
In GCC 5.1, the ABI for std::string was changed, and binaries compiling with gcc >= 5.1 are not ABI-compatible with binaries build with gcc < 5.1 (like pytorch) unless you set that flag. | 2 | 2018-11-16T21:44:36.029Z | https://discuss.pytorch.org/t/issues-linking-with-libtorch-c-11-abi/29510/7 | I had the same issue when opening a tarfile. A quick fix is to open a zipfile handle once at the start of __getitem__.
class MyDataSet(Dataset):
def __init__(self, filename):
self.zip_handle = None
self.fname = filename
def __getitem__(self, x):
if self.zip_handle is… we have that flag set because we build with gcc 4.9.x, which only has the old ABI.
In GCC 5.1, the ABI for std::string was changed, and binaries compiling with gcc >= 5.1 are not ABI-compatible with binaries build with gcc < 5.1 (like pytorch) unless you set that flag. You could use the <a href="https://github.com/pytorch/examples/blob/master/imagenet/main.py#L327" rel="nofollow noopener">ImageNet example</a> or the following manual approach:
for epoch in range(num_epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
running_loss += loss.item() * images.size(0)
loss_values.append(running_loss / len(train_dataset))
plt.plot(loss_va… | 1,097 | {'text': ['we have that flag set because we build with gcc 4.9.x, which only has the old ABI.\n\nIn GCC 5.1, the ABI for std::string was changed, and binaries compiling with gcc >= 5.1 are not ABI-compatible with binaries build with gcc < 5.1 (like pytorch) unless you set that flag.'], 'answer_start': [1097]} |
Plotting loss curve | I am trying to plot a loss curve by each epoch, but I’m not sure how to do that. I can do it for 1 epoch using the following method:
def train(model, num_epoch):
for epoch in range(num_epoch):
running_loss = 0.0
loss_values = []
for i, data in enumerate(trainloader, 0):… | 1 | 2019-04-15T14:07:59.406Z | You could use the <a href="https://github.com/pytorch/examples/blob/master/imagenet/main.py#L327" rel="nofollow noopener">ImageNet example</a> or the following manual approach:
for epoch in range(num_epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
running_loss += loss.item() * images.size(0)
loss_values.append(running_loss / len(train_dataset))
plt.plot(loss_va… | 7 | 2019-04-16T09:21:06.862Z | https://discuss.pytorch.org/t/plotting-loss-curve/42632/4 | I had the same issue when opening a tarfile. A quick fix is to open a zipfile handle once at the start of __getitem__.
class MyDataSet(Dataset):
def __init__(self, filename):
self.zip_handle = None
self.fname = filename
def __getitem__(self, x):
if self.zip_handle is… we have that flag set because we build with gcc 4.9.x, which only has the old ABI.
In GCC 5.1, the ABI for std::string was changed, and binaries compiling with gcc >= 5.1 are not ABI-compatible with binaries build with gcc < 5.1 (like pytorch) unless you set that flag. You could use the <a href="https://github.com/pytorch/examples/blob/master/imagenet/main.py#L327" rel="nofollow noopener">ImageNet example</a> or the following manual approach:
for epoch in range(num_epochs):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
running_loss += loss.item() * images.size(0)
loss_values.append(running_loss / len(train_dataset))
plt.plot(loss_va… | 559 | {'text': ['You could use the <a href="https://github.com/pytorch/examples/blob/master/imagenet/main.py#L327" rel="nofollow noopener">ImageNet example</a> or the following manual approach:\n\nfor epoch in range(num_epochs):\n\nrunning_loss = 0.0\n\nfor i, data in enumerate(trainloader, 0):\n\nrunning_loss =+ loss.item() * images.size(0)\n\nloss_values.append(running_loss / len(train_dataset))\n\nplt.plot(loss_va…'], 'answer_start': [559]} |
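A self-contained version of the loss-curve recipe above (note the += accumulation; the tiny linear model and random data are stand-ins):

import torch
import torch.nn as nn
import matplotlib.pyplot as plt
from torch.utils.data import DataLoader, TensorDataset

train_dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))
trainloader = DataLoader(train_dataset, batch_size=10)
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

loss_values = []
for epoch in range(5):
    running_loss = 0.0
    for images, targets in trainloader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        running_loss += loss.item() * images.size(0)  # sum of per-sample losses
    loss_values.append(running_loss / len(train_dataset))  # per-epoch average

plt.plot(loss_values)
plt.show()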
Zero grad on single parameter | Hi,
I found this code to zero the gradients on a single parameter:
a.grad.zero_()
But it is not working: AttributeError: 'NoneType' object has no attribute 'zero_'
I previously declared:
a = torch.tensor(-1., requires_grad=True)
a = nn.Parameter(a) | 2 | 2019-03-17T20:47:33.311Z | Hi,
Before you call .backward(), the gradients of all tensors with requires_grad=True are None.
As in the case you posted, you need to populate a.grad first (via .backward()) and can then call zero_() on it.
In opt.zero_grad() this is declared explicitly:
def zero_grad(self):
r"""Clears the gradients of all op… | 4 | 2019-03-18T02:24:27.664Z | https://discuss.pytorch.org/t/zero-grad-on-single-parameter/40098/3 | Hi,
Before you call .backward(), the gradients of all tensors with requires_grad=True are None.
As in the case you posted, you need to populate a.grad first (via .backward()) and can then call zero_() on it.
In opt.zero_grad() this is declared explicitly:
def zero_grad(self):
r"""Clears the gradients of all op… Thanks for the code.
I’m not sure if you need this workaround or why you are replacing self.model.conv1 with a conv layer accepting a single input channel.
The default resnet18 model already accepts RGB images, so you could just remove the self.model.conv1 line of code as well as self.… Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))
dataset = ConcatDataset(datasets)
loader = DataLoader(
dataset,
shuffle=False,
num_workers=0,
b… | 1,916 | {'text': ['Hi,\n\nBefore you call .backward(), the gradient of each tensor which requires_grad=True are all None.\n\nLike the case you posted, you could calculate a.grad firstly and then zero_() its grad.\n\nIn opt.zero_grad() it declare explicity:\n\ndef zero_grad(self):\n\nr"""Clears the gradients of all op…'], 'answer_start': [1916]} |
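A minimal sketch of the sequence the answer describes: the grad is None until the first backward pass, and only then can it be zeroed in place:

import torch
import torch.nn as nn

a = nn.Parameter(torch.tensor(-1.))
print(a.grad)       # None: no backward pass has run yet, so a.grad.zero_() would fail

(a * 2).backward()  # populate the gradient first
print(a.grad)       # tensor(2.)

a.grad.zero_()      # now the inplace zeroing works
print(a.grad)       # tensor(0.)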
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[3, 1, 224, 224] to have 3 channels, but got 1 channels instead | I see a couple of such posts in the forum but I have a hard time generalizing them to my own problem. Here’s the error:
torch.Size([3, 1, 224, 224])
Traceback (most recent call last):
File "test_loocv.py", line 245, in <module>
output = model_ft(test_data)
File "/scratch/sjn-p3/anaconda/anaconda3/li… | 1 | 2018-11-21T06:33:35.228Z | Thanks for the code.
I’m not sure if you need this workaround or why you are replacing self.model.conv1 with a conv layer accepting a single input channel.
The default resnet18 model already accepts RGB images, so you could just remove the self.model.conv1 line of code as well as self.… | 1 | 2020-10-06T22:58:42.423Z | https://discuss.pytorch.org/t/runtimeerror-given-groups-1-weight-of-size-64-3-7-7-expected-input-3-1-224-224-to-have-3-channels-but-got-1-channels-instead/30153/29 | Hi,
Before you call .backward(), the gradients of all tensors with requires_grad=True are None.
As in the case you posted, you need to populate a.grad first (via .backward()) and can then call zero_() on it.
In opt.zero_grad() this is declared explicitly:
def zero_grad(self):
r"""Clears the gradients of all op… Thanks for the code.
I’m not sure if you need this workaround or why you are replacing self.model.conv1 with a conv layer accepting a single input channel.
The default resnet18 model already accepts RGB images, so you could just remove the self.model.conv1 line of code as well as self.… Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))
dataset = ConcatDataset(datasets)
loader = DataLoader(
dataset,
shuffle=False,
num_workers=0,
b… | 1,271 | {'text': ['Thanks for the code.\n\nI’m not sure, if you would need this workaround and why you are replacing the self.model.conv1 with a conv layer accepting a single input channel.\n\nThe default resnet18 model already accepts RGB images, so you could just remove the self.model.conv1 line of code as well as self.…'], 'answer_start': [1271]} |
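A hedged sketch of the two usual fixes for this 1-vs-3-channel mismatch (the answer above favors keeping the stock conv1 and fixing the input instead):

import torch
from torchvision import models

model = models.resnet18()
x = torch.randn(3, 1, 224, 224)  # batch of 3 single-channel images

# option 1: repeat the gray channel so the stock 3-channel conv1 accepts it
out = model(x.repeat(1, 3, 1, 1))

# option 2: swap conv1 for a single-channel version instead of repeating the input
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
out = model(x)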
How does ConcatDataset work? | Hello. This is my CustomDataSetClass:
class CustomDataSet(Dataset):
def __init__(self, main_dir, transform):
self.main_dir = main_dir
self.transform = transform
all_imgs = os.listdir(main_dir)
self.total_imgs = natsort.natsorted(all_imgs)
for file_name in… | 1 | 2019-11-05T15:21:46.197Z | Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))
dataset = ConcatDataset(datasets)
loader = DataLoader(
dataset,
shuffle=False,
num_workers=0,
b… | 8 | 2019-11-06T04:53:22.325Z | https://discuss.pytorch.org/t/how-does-concatdataset-work/60083/2 | Hi,
Before you call .backward(), the gradients of all tensors with requires_grad=True are None.
As in the case you posted, you need to populate a.grad first (via .backward()) and can then call zero_() on it.
In opt.zero_grad() this is declared explicitly:
def zero_grad(self):
r"""Clears the gradients of all op… Thanks for the code.
I’m not sure if you need this workaround or why you are replacing self.model.conv1 with a conv layer accepting a single input channel.
The default resnet18 model already accepts RGB images, so you could just remove the self.model.conv1 line of code as well as self.… Yes, the order should be preserved as shown in this simple example using TensorDatasets:
datasets = []
for i in range(3):
datasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))
dataset = ConcatDataset(datasets)
loader = DataLoader(
dataset,
shuffle=False,
num_workers=0,
b… | 622 | {'text': ['Yes, the order should be preserved as shown in this simple example using TensorDatasets:\n\ndatasets = []\n\nfor i in range(3):\n\ndatasets.append(TensorDataset(torch.arange(i*10, (i+1)*10)))\n\ndataset = ConcatDataset(datasets)\n\nloader = DataLoader(\n\ndataset,\n\nshuffle=False,\n\nnum_workers=0,\n\nb…'], 'answer_start': [622]} |
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation | Hello, I’m getting the following error:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please c… | 2 | 2022-01-18T13:55:30.101Z | <a class="mention" href="/u/ptrblck">@ptrblck</a>, many thanks for your help.
Confirming that the problem was resolved by:
1- creating new conda env
2- installing pytorch 1.9.0 with cuda 11.1 wheel | 0 | 2022-03-18T18:25:22.222Z | https://discuss.pytorch.org/t/nvidia-geforce-rtx-3090-with-cuda-capability-sm-86-is-not-compatible-with-the-current-pytorch-installation/141940/9 | <a class="mention" href="/u/ptrblck">@ptrblck</a>, many thanks for your help.
Confirming that the problem was resolved by:
1- creating new conda env
2- installing pytorch 1.9.0 with cuda 11.1 wheel [image] josmi9966:
But why would I want to e.g. choose the cuda 8.0
over the cuda 9.0 version there?
Might be useful if you have an older card that doesn’t support CUDA 9.0 via its drivers, yet.
could I simply always install the version with most recent cuda (9.1 currently) and be happy?
… I have recently answered some other post with a similar question. But basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create you… | 1,834 | {'text': ['<a class="mention" href="/u/ptrblck">@ptrblck</a>, many thanks for your help.\n\nConfirming that the problem resolved by:\n\n1- creating new conda env\n\n2- installing pytorch 1.9.0 with cuda 11.1 wheel'], 'answer_start': [1834]} |
Please help me understand installation for CUDA on linux | I was not really able to find anything on this.
There are pre-compiled PyTorch packages for different versions of Python, pip or conda, and different versions of CUDA or CPU-only on the web site.
Is it true that PyTorch does not need any CUDA or cuDNN or other library installed on the target syste… | 1 | 2018-03-01T19:31:21.199Z | [image] josmi9966:
But why would I want to e.g. choose the cuda 8.0
over the cuda 9.0 version there?
Might be useful if you have an older card that doesn’t support CUDA 9.0 via its drivers, yet.
could I simply always install the version with most recent cuda (9.1 currently) and be happy?
… | 4 | 2018-03-02T17:55:49.236Z | https://discuss.pytorch.org/t/please-help-me-understand-installation-for-cuda-on-linux/14217/6 | <a class="mention" href="/u/ptrblck">@ptrblck</a>, many thanks for your help.
Confirming that the problem resolved by:
1- creating new conda env
2- installing pytorch 1.9.0 with cuda 11.1 wheel [image] josmi9966:
But why would I want to e.g. choose the cuda 8.0
over the cuda 9.0 version there?
Might be useful if you have an older card that doesn’t support CUDA 9.0 via its drivers, yet.
could I simply always install the version with most recent cuda (9.1 currently) and be happy?
… I have recently answered some other post with a similar question. But basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create you… | 1,115 | {'text': ['[image] josmi9966:\n\nBut why would I want to e.g. choose the cuda 8.0\n\nover the cuda 9.0 version there?\n\nMight be useful if you have an older card that doesn’t support CUDA 9.0 via its drivers, yet.\n\ncould I simply always install the version with most recent cuda (9.1 currently) and be happy?\n\n…'], 'answer_start': [1115]} |
How to use collate_fn() | Hi,
I am not sure what collate_fn does.
is there any example that helps understanding what it does? | 15 | 2018-10-13T13:16:11.658Z | I have recently answered some other post with a similar question. But basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create you… | 26 | 2019-07-18T01:06:28.816Z | https://discuss.pytorch.org/t/how-to-use-collate-fn/27181/4 | <a class="mention" href="/u/ptrblck">@ptrblck</a>, many thanks for your help.
Confirming that the problem resolved by:
1- creating new conda env
2- installing pytorch 1.9.0 with cuda 11.1 wheel [image] josmi9966:
But why would I want to e.g. choose the cuda 8.0
over the cuda 9.0 version there?
Might be useful if you have an older card that doesn’t support CUDA 9.0 via its drivers, yet.
could I simply always install the version with most recent cuda (9.1 currently) and be happy?
… I have recently answered some other post with a similar question. But basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create you… | 501 | {'text': ['I have recently answered some other post with a similar question. But basically, the collate_fn receives a list of tuples if your __getitem__ function from a Dataset subclass returns a tuple, or just a normal list if your Dataset subclass returns only one element. Its main objective is to create you…'], 'answer_start': [501]} |
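A self-contained sketch of a custom collate_fn: it receives the list of (sequence, label) tuples and pads the variable-length sequences into one batch (the toy dataset is illustrative):

import torch
from torch.utils.data import DataLoader, Dataset

class VarLenDataset(Dataset):
    def __getitem__(self, i):
        return torch.ones(i + 1), i  # (sequence, label) tuple per sample
    def __len__(self):
        return 4

def pad_collate(batch):  # batch is a list of (seq, label) tuples
    seqs, labels = zip(*batch)
    padded = torch.nn.utils.rnn.pad_sequence(seqs, batch_first=True)
    return padded, torch.tensor(labels)

loader = DataLoader(VarLenDataset(), batch_size=4, collate_fn=pad_collate)
x, y = next(iter(loader))  # x: [4, 4] padded sequences, y: [4] labels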
How to switch to older version of pytorch? | I have a problem with version 0.4 and want to go back to version 0.3.
How can I do that?
I checked the following link (<a href="https://pytorch.org/previous-versions/" rel="nofollow noopener">https://pytorch.org/previous-versions/</a>) but when I run the command, it gives me this error:
torch-0.3.1-cp36-cp36m-linux_x86_64.whl is not a supported wheel on this platform.
… | 1 | 2018-06-13T17:48:40.877Z | Could you post the error?
What does this command output?
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch | 1 | 2020-03-06T01:38:19.366Z | https://discuss.pytorch.org/t/how-to-switch-to-older-version-of-pytorch/19656/12 | Could you post the error?
What does this command output?
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch weird. can you try:
conda update -y conda
conda install mkl=2018
conda install pytorch=0.3.0 -c pytorch I actually have narrowed the issue down.
it seems to be happening when I tried to use more than 1 GPU.
def prepare_device(n_gpu_use):
"""
setup GPU device if available, move model into configured device
"""
n_gpu = torch.cuda.device_count()
if n_gpu_use > 0 and n_gpu == 0:
… | 1,618 | {'text': ['Could you post the error?\n\nWhat does this command output?\n\nconda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch'], 'answer_start': [1618]} |
Updating to latest or recent version using package manager | after updating using the command
conda update pytorch
or uninstalling pytorch and reinstalling with
conda install pytorch torchvision -c pytorch
CosineSimilarity disappears from <a href="http://distance.py" rel="nofollow noopener">distance.py</a>.
however it appears that CosineSimilarity remains in the master branch of the source code. Anyone els… | 2 | 2018-01-05T21:33:58.182Z | weird. can you try:
conda update -y conda
conda install mkl=2018
conda install pytorch=0.3.0 -c pytorch | 2 | 2018-01-06T18:19:47.618Z | https://discuss.pytorch.org/t/updating-to-latest-or-recent-version-using-package-manager/11925/6 | Could you post the error?
What does this command output?
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch weird. can you try:
conda update -y conda
conda install mkl=2018
conda install pytorch=0.3.0 -c pytorch I actually have narrowed the issue down.
it seems to be happening when I tried to use more than 1 GPU.
def prepare_device(n_gpu_use):
"""
setup GPU device if available, move model into configured device
"""
n_gpu = torch.cuda.device_count()
if n_gpu_use > 0 and n_gpu == 0:
… | 943 | {'text': ['weird. can you try:\n\nconda update -y conda\n\nconda install mkl=2018\n\nconda install pytorch=0.3.0 -c pytorch'], 'answer_start': [943]} |
Model param.grad is None, how to debug? | I have code that accumulates the grad of each layer after the .backward() call on the loss.
It was working, but after some change I am seeing a model where all parameters have grad None.
I guess since grad is None, no training is happening.
When does it usually happen? What should I check to find out the cau… | 2 | 2019-08-06T06:26:57.996Z | I actually have narrowed the issue down.
it seems to be happening when I tried to use more than 1 GPU.
def prepare_device(n_gpu_use):
"""
setup GPU device if available, move model into configured device
"""
n_gpu = torch.cuda.device_count()
if n_gpu_use > 0 and n_gpu == 0:
… | 0 | 2019-08-06T21:33:32.398Z | https://discuss.pytorch.org/t/model-param-grad-is-none-how-to-debug/52634/10 | Could you post the error?
What does this command output?
conda install pytorch==1.1.0 torchvision==0.3.0 cudatoolkit=9.0 -c pytorch weird. can you try:
conda update -y conda
conda install mkl=2018
conda install pytorch=0.3.0 -c pytorch I actually have narrowed the issue down.
it seems to be happening when I tried to use more than 1 GPU.
def prepare_device(n_gpu_use):
"""
setup GPU device if available, move model into configured device
"""
n_gpu = torch.cuda.device_count()
if n_gpu_use > 0 and n_gpu == 0:
… | 241 | {'text': ['I actually have narrowed the issue down.\n\nit seems to be happening when I tried to use more than 1 GPU.\n\ndef prepare_device(n_gpu_use):\n\n"""\n\nsetup GPU device if available, move model into configured device\n\n"""\n\nn_gpu = torch.cuda.device_count()\n\nif n_gpu_use > 0 and n_gpu == 0:\n\n…'], 'answer_start': [241]} |
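A small sketch for this kind of debugging: after one backward pass, list which parameters actually received a gradient (the frozen layer here simulates a parameter detached from the loss):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.ReLU(), nn.Linear(4, 1))
model[0].weight.requires_grad_(False)  # simulate a frozen/detached parameter

model(torch.randn(2, 4)).mean().backward()
for name, param in model.named_parameters():
    print(name, 'grad is None' if param.grad is None else 'has grad')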
Training gets slow down by each batch slowly | Hi there,
I have a pre-trained model, and I added an actor-critic method into the model and trained only on the rl-related parameter (I fixed the parameters from pre-trained model). However, I noticed that the training speed gets slow down slowly at each batch and memory usage on GPU also increases… | 2 | 2017-06-30T01:22:45.587Z | Turns out I had declared the Variable tensors holding a batch of features and labels outside the loop over the 20000 batches, then filled them up for each batch. Moving the declarations of those tensors inside the loop (which I thought would be less efficient) solved my slowdown problem. Now the f… | 6 | 2017-11-06T19:35:07.992Z | https://discuss.pytorch.org/t/training-gets-slow-down-by-each-batch-slowly/4460/7 | Turns out I had declared the Variable tensors holding a batch of features and labels outside the loop over the 20000 batches, then filled them up for each batch. Moving the declarations of those tensors inside the loop (which I thought would be less efficient) solved my slowdown problem. Now the f… If you are using nn.BCELoss, the output should use torch.sigmoid as the activation function. Alternatively, you won’t use any activation function and pass raw logits to nn.BCEWithLogitsLoss. If you use nn.CrossEntropyLoss for the multi-class segmentation, you should also pass the raw logits withou… The issue is created by the inplace unsqueeze_ call on action_value, but is raised in the tanh.
If you use action_value = action_value.unsqueeze(-1) instead, your code should work. | 1,128 | {'text': ['Turns out I had declared the Variable tensors holding a batch of features and labels outside the loop over the 20000 batches, then filled them up for each batch. Moving the declarations of those tensors inside the loop (which I thought would be less efficient) solved my slowdown problem. Now the f…'], 'answer_start': [1128]} |
Multiclass Segmentation | Hi, is there an example for creating a custom dataset and training for multiclass segmentation using U-Net? I find many examples for binary segmentation but yet to find something for multiclass segmentation. Thank you! | 2 | 2019-08-22T14:57:22.324Z | If you are using nn.BCELoss, the output should use torch.sigmoid as the activation function. Alternatively, you won’t use any activation function and pass raw logits to nn.BCEWithLogitsLoss. If you use nn.CrossEntropyLoss for the multi-class segmentation, you should also pass the raw logits withou… | 5 | 2019-08-22T17:31:37.688Z | https://discuss.pytorch.org/t/multiclass-segmentation/54065/4 | Turns out I had declared the Variable tensors holding a batch of features and labels outside the loop over the 20000 batches, then filled them up for each batch. Moving the declarations of those tensors inside the loop (which I thought would be less efficient) solved my slowdown problem. Now the f… If you are using nn.BCELoss, the output should use torch.sigmoid as the activation function. Alternatively, you won’t use any activation function and pass raw logits to nn.BCEWithLogitsLoss. If you use nn.CrossEntropyLoss for the multi-class segmentation, you should also pass the raw logits withou… The issue is created by the inplace unsqueeze_ call on action_value, but is raised in the tanh.
If you use action_value = action_value.unsqueeze(-1) instead, your code should work. | 873 | {'text': ['If you are using nn.BCELoss, the output should use torch.sigmoid as the activation function. Alternatively, you won’t use any activation function and pass raw logits to nn.BCEWithLogitsLoss. If you use nn.CrossEntropyLoss for the multi-class segmentation, you should also pass the raw logits withou…'], 'answer_start': [873]} |
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [3, 1]], which is output 0 of TanhBackward, is at version 1; expected version 0 instead | Hey,
I’m getting this error but I don’t understand why it has a problem with ‘action_value = torch.tanh(self.action_values(x))’; when I use relu or hardtanh there is no problem.
class NAF(nn.Module):
def __init__(self, state_size, action_size,layer_size, n_step, seed):
super(NAF, self).… | 2 | 2020-07-01T18:11:26.156Z | The issue is created by the inplace unsqueeze_ call on action_value, but is raised in the tanh.
If you use action_value = action_value.unsqueeze(-1) instead, your code should work. | 5 | 2020-07-02T10:53:53.736Z | https://discuss.pytorch.org/t/runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation-torch-floattensor-3-1-which-is-output-0-of-tanhbackward-is-at-version-1-expected-version-0-instead/87630/2 | Turns out I had declared the Variable tensors holding a batch of features and labels outside the loop over the 20000 batches, then filled them up for each batch. Moving the declarations of those tensors inside the loop (which I thought would be less efficient) solved my slowdown problem. Now the f… If you are using nn.BCELoss, the output should use torch.sigmoid as the activation function. Alternatively, you won’t use any activation function and pass raw logits to nn.BCEWithLogitsLoss. If you use nn.CrossEntropyLoss for the multi-class segmentation, you should also pass the raw logits withou… The issue is created by the inplace unsqueeze_ call on action_value, but is raised in the tanh.
If you use action_value = action_value.unsqueeze(-1) instead, your code should work. | 616 | {'text': ['The issue is created by the inplace unsqueeze_ call on action_value, but is raised in the tanh.\n\nIf you use action_value = action_value.unsqueeze(-1) instead, your code should work.'], 'answer_start': [616]} |
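A minimal reproduction of the failure mode and the out-of-place fix, using a bare tanh instead of the full NAF model:

import torch

x = torch.randn(3, 1, requires_grad=True)
y = torch.tanh(x)    # tanh's backward reuses its own output tensor
# y.unsqueeze_(-1)   # inplace: bumps y's version counter -> RuntimeError on backward
y = y.unsqueeze(-1)  # out-of-place: returns a new view, the saved output stays intact
y.sum().backward()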
PyTorch Gradients | Normally when we’re doing backprop we would do the following:
loss.backward() # This calculates the gradients
optimizer.step() # This updates the net
However, what if I wish to accumulate the gradients? Meaning I want to run various loss.backward() multiple times first and accumulate the gradie… | 1 | 2017-03-05T08:24:54.172Z | I think a simpler way to do this would be:
num_epoch = 10
real_batchsize = 100 # I want to update weight every `real_batchsize`
for epoch in range(num_epoch):
total_loss = 0
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data.cuda()), Variable(tar… | 4 | 2017-11-06T11:17:39.131Z | https://discuss.pytorch.org/t/pytorch-gradients/884/10 | I think a simpler way to do this would be:
num_epoch = 10
real_batchsize = 100 # I want to update weight every `real_batchsize`
for epoch in range(num_epoch):
total_loss = 0
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data.cuda()), Variable(tar… No, it is not supported on Windows. The reason is that multiprocessing lib doesn’t have it implemented on Windows. There are some alternatives like dill that can pickle more objects. Thanks.
Are you running some custom CUDA extensions? No, I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a>
did you build PyTorch from source? No, I am using its modules and functions.
which GPU are you using? I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a> and I do not know what is it.
I found what ca… | 1,594 | {'text': ['I think a simpler way to do this would be:\n\nnum_epoch = 10\n\nreal_batchsize = 100 # I want to update weight every `real_batchsize`\n\nfor epoch in range(num_epoch):\n\ntotal_loss = 0\n\nfor batch_idx, (data, target) in enumerate(train_loader):\n\ndata, target = Variable(data.cuda()), Variable(tar…'], 'answer_start': [1594]} |
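A condensed, self-contained sketch of the accumulation pattern above; the loss is divided by the number of accumulation steps so the update matches one larger batch (the model and data are stand-ins):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
train_loader = DataLoader(TensorDataset(torch.randn(64, 10),
                                        torch.randint(0, 2, (64,))), batch_size=4)

accum_steps = 4  # update weights every 4 mini-batches
optimizer.zero_grad()
for i, (data, target) in enumerate(train_loader):
    loss = criterion(model(data), target) / accum_steps  # keep gradient scale
    loss.backward()  # gradients accumulate in param.grad across iterations
    if (i + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()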
Can't pickle local object 'DataLoader.__init__.<locals>.<lambda>' | Hi all,
I hope everybody reading this is having a great day.
So I have a problem with the torchvision.transforms.Lambda() function when used with the python function enumerate. I am using it to make my uni-channeled image into a multi-channeled tensor. It works fine and produces a data loader instance for tor… | 1 | 2018-12-11T12:03:49.778Z | No, it is not supported on Windows. The reason is that multiprocessing lib doesn’t have it implemented on Windows. There are some alternatives like dill that can pickle more objects. | 3 | 2018-12-11T14:06:46.529Z | https://discuss.pytorch.org/t/cant-pickle-local-object-dataloader-init-locals-lambda/31857/14 | I think a simpler way to do this would be:
num_epoch = 10
real_batchsize = 100 # I want to update weight every `real_batchsize`
for epoch in range(num_epoch):
total_loss = 0
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data.cuda()), Variable(tar… No, it is not supported on Windows. The reason is that multiprocessing lib doesn’t have it implemented on Windows. There are some alternatives like dill that can pickle more objects. Thanks.
Are you running some custom CUDA extensions? No, I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a>
did you build PyTorch from source? No, I am using its modules and functions.
which GPU are you using? I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a> and I do not know what is it.
I found what ca… | 1,094 | {'text': ['No, it is not supported on Windows. The reason is that multiprocessing lib doesn’t have it implemented on Windows. There are some alternatives like dill that can pickle more objects.'], 'answer_start': [1094]} |
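Since Windows uses the spawn start method, the usual workaround is to replace the lambda with a picklable top-level callable; a hedged sketch (the repeat-to-3-channels transform is just an example):

from torchvision import transforms

class ToThreeChannels:
    # picklable replacement for transforms.Lambda(lambda x: x.repeat(3, 1, 1))
    def __call__(self, x):
        return x.repeat(3, 1, 1)

transform = transforms.Compose([
    transforms.ToTensor(),
    ToThreeChannels(),  # defined at module level, so num_workers > 0 can pickle it
])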
RuntimeError: cuda runtime error (100) : no CUDA-capable device is detected at ..\aten\src\THC\THCGeneral.cpp:50 | I was trying to run the extractive summarizer of the BERTSUM program(<a href="https://github.com/nlpyang/PreSumm/tree/master/src" rel="nofollow noopener">https://github.com/nlpyang/PreSumm/tree/master/src</a>) in test mode with the following command:
python train.py -task ext -mode test -batch_size 3000 -test_batch_size 500 -bert_data_path C:\Users\hp\Downloads\PreSumm-master\PreSumm… | 2 | 2020-01-14T08:20:45.138Z | Thanks.
Are you running some custom CUDA extensions? No, I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a>
Did you build PyTorch from source? No, I am using its modules and functions.
Which GPU are you using? I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a> and I do not know what it is.
I found what ca… | 2 | 2020-04-23T20:05:10.962Z | https://discuss.pytorch.org/t/runtimeerror-cuda-runtime-error-100-no-cuda-capable-device-is-detected-at-aten-src-thc-thcgeneral-cpp-50/66606/9 | I think a simpler way to do this would be:
num_epoch = 10
real_batchsize = 100 # I want to update weight every `real_batchsize`
for epoch in range(num_epoch):
total_loss = 0
for batch_idx, (data, target) in enumerate(train_loader):
data, target = Variable(data.cuda()), Variable(tar… No, it is not supported on Windows. The reason is that multiprocessing lib doesn’t have it implemented on Windows. There are some alternatives like dill that can pickle more objects. Thanks.
Are you running some custom CUDA extensions? No, I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a>
Did you build PyTorch from source? No, I am using its modules and functions.
Which GPU are you using? I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a> and I do not know what it is.
I found what ca… | 480 | {'text': ['Thanks.\n\nAre you running some custom CUDA extensions? No, I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a>\n\ndid you build PyTorch from source? No, I am using its modules and functions.\n\nwhich GPU are you using? I am using <a href="https://colab.research.google.com/" rel="nofollow noopener">https://colab.research.google.com/</a> and I do not know what is it.\n\nI found what ca…'], 'answer_start': [480]} |
How to split backward process wrt each layer of neural network? | Hi everyone,
I’m working on a project that requires me to have access to each step of backward propagation during the training process. Say I have a 10 layer fully connected neural net (input->fc1->fc2->…->fc10->output), and during the backward process I want something like output.backward()->fc10.… | 3 | 2017-09-08T21:52:17.511Z | here’s a more precise and fuller example. What you are doing in my example is to completely avoid autograd’s automatic backward computation and manually reverse-computing the backward graph.
For anyone coming here with a search, my solution is a hack, it is not good practice. it is given as an ill… | 1 | 2017-10-11T03:36:25.462Z | https://discuss.pytorch.org/t/how-to-split-backward-process-wrt-each-layer-of-neural-network/7190/10 | here’s a more precise and fuller example. What you are doing in my example is to completely avoid autograd’s automatic backward computation and manually reverse-computing the backward graph.
For anyone coming here with a search, my solution is a hack, it is not good practice. it is given as an ill… Your network is still on cpu. Add NN = NN.cuda(). Ah ok, thanks for the info.
It looks like a standard segmentation task.
I would suggest to use nn.CrossEntropyLoss for your use case.
Have a look at the following code snippet:
n_class = 10
preds = torch.randn(4, n_class, 24, 24)
labels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)… | 1,868 | {'text': ['here’s a more precise and fuller example. What you are doing in my example is to completely avoid autograd’s automatic backward computation and manually reverse-computing the backward graph.\n\nFor anyone coming here with a search, my solution is a hack, it is not good practice. it is given as an ill…'], 'answer_start': [1868]} |
Type mismatch on model when using GPU | Hello I am writing a small pytorch example with a simple NN. The program runs fine if I declare
dtype = torch.FloatTensor
#dtype = torch.cuda.FloatTensor # Uncomment this to run on GPU
The code currently runs great with the CPU option. However, as soon as I uncomment and switch to the GPU option,… | 1 | 2017-12-20T21:21:21.159Z | Your network is still on cpu. Add NN = NN.cuda(). | 11 | 2017-12-20T21:29:35.431Z | https://discuss.pytorch.org/t/type-mismatch-on-model-when-using-gpu/11409/2 | here’s a more precise and fuller example. What you are doing in my example is to completely avoid autograd’s automatic backward computation and manually reverse-computing the backward graph.
For anyone coming here with a search, my solution is a hack, it is not good practice. it is given as an ill… Your network is still on cpu. Add NN = NN.cuda(). Ah ok, thanks for the info.
It looks like a standard segmentation task.
I would suggest to use nn.CrossEntropyLoss for your use case.
Have a look at the following code snippet:
n_class = 10
preds = torch.randn(4, n_class, 24, 24)
labels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)… | 1,243 | {'text': ['Your network is still on cpu. Add NN = NN.cuda().'], 'answer_start': [1243]} |
Multi-Class Cross Entropy Loss function implementation in PyTorch | I’m trying to implement a multi-class cross entropy loss function in pytorch, for a 10 class semantic segmentation problem. The shape of the predictions and labels are both [4, 10, 256, 256] where 4 is the batch size, 10 the number of channels, 256x256 the height and width of the images.
The follow… | 0 | 2018-06-02T01:24:55.841Z | Ah ok, thanks for the info.
It looks like a standard segmentation task.
I would suggest to use nn.CrossEntropyLoss for your use case.
Have a look at the following code snippet:
n_class = 10
preds = torch.randn(4, n_class, 24, 24)
labels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)… | 3 | 2018-06-02T12:41:27.266Z | https://discuss.pytorch.org/t/multi-class-cross-entropy-loss-function-implementation-in-pytorch/19077/13 | here’s a more precise and fuller example. What you are doing in my example is to completely avoid autograd’s automatic backward computation and manually reverse-computing the backward graph.
For anyone coming here with a search, my solution is a hack, it is not good practice. it is given as an ill… Your network is still on cpu. Add NN = NN.cuda(). Ah ok, thanks for the info.
It looks like a standard segmentation task.
I would suggest to use nn.CrossEntropyLoss for your use case.
Have a look at the following code snippet:
n_class = 10
preds = torch.randn(4, n_class, 24, 24)
labels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)… | 359 | {'text': ['Ah ok, thanks for the info.\n\nIt looks like a standard segmentation task.\n\nI would suggest to use nn.CrossEntropyLoss for your use case.\n\nHave a look at the following code snippet:\n\nn_class = 10\n\npreds = torch.randn(4, n_class, 24, 24)\n\nlabels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)…'], 'answer_start': [359]} |
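A completed version of the truncated snippet above, under the same assumed shapes:

import torch
import torch.nn as nn

n_class = 10
preds = torch.randn(4, n_class, 24, 24, requires_grad=True)  # raw logits [N, C, H, W]
labels = torch.empty(4, 24, 24, dtype=torch.long).random_(n_class)  # indices [N, H, W]

criterion = nn.CrossEntropyLoss()
loss = criterion(preds, labels)  # applies log_softmax + NLL over the class dim
loss.backward()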
[resolved] Cuda Runtime Error(30) | When I run the code torch.cuda.is_available(), I meet the error as below:
THCudaCheck FAIL file=torch/csrc/cuda/Module.cpp line=109 error=30 : unknown error
Traceback (most recent call last):
File "trainer.py", line 13, in <module>
if torch.cuda.is_available():
File "/usr/local/lib/python2.… | 0 | 2017-03-16T12:50:12.345Z | The solution can be found <a href="https://github.com/tensorflow/tensorflow/issues/5777#issuecomment-301058363" rel="nofollow noopener">here</a>. Basically, run the following commands in the terminal:
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm | 4 | 2019-03-07T10:01:21.284Z | https://discuss.pytorch.org/t/resolved-cuda-runtime-error-30/1116/18 | The solution can be found <a href="https://github.com/tensorflow/tensorflow/issues/5777#issuecomment-301058363" rel="nofollow noopener">here</a>. Basically, run the following commands in the terminal:
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm Do you have cuDNN installed and enabled?
If so, could you check if your observation is the same as the issue described here? <a href="https://github.com/pytorch/pytorch/issues/3665">https://github.com/pytorch/pytorch/issues/3665</a> Sure! The specific model and all its parameters, including the optimizer, will all be together in the same file. So, you just need to load the corresponding file. So, if we set the variable epoch=10, then the filename as determined above will be
epoch = 10
PATH = 'train_valid_exp4-epoch{}.pth'.format(… | 1,338 | {'text': ['The solution can be found <a href="https://github.com/tensorflow/tensorflow/issues/5777#issuecomment-301058363" rel="nofollow noopener">here</a>. Basically, run the following commands in the terminal:\n\nsudo rmmod nvidia_uvm\n\nsudo rmmod nvidia\n\nsudo modprobe nvidia\n\nsudo modprobe nvidia_uvm'], 'answer_start': [1338]} |
Memory (RAM) usage keep going up every step | Hello, first of all I would like to say that I like PyTorch so far and am eager to see what it will do in the future.
I train a custom Module char-RNN because I want to save the last hidden state. But it seems that at every step my memory (RAM) usage keeps getting bigger and bigger. I don’t know where or what … | 2 | 2018-01-10T13:59:39.844Z | Do you have cuDNN installed and enabled?
If so, could you check if your observation is the same as the issue described here? <a href="https://github.com/pytorch/pytorch/issues/3665">https://github.com/pytorch/pytorch/issues/3665</a> | 2 | 2018-01-10T18:06:28.857Z | https://discuss.pytorch.org/t/memory-ram-usage-keep-going-up-every-step/12109/4 | The solution can be found <a href="https://github.com/tensorflow/tensorflow/issues/5777#issuecomment-301058363" rel="nofollow noopener">here</a>. Basically, run the following commands in the terminal:
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm Do you have cuDNN installed and enabled?
If so, could you check if your observation is the same as the issue described here? <a href="https://github.com/pytorch/pytorch/issues/3665">https://github.com/pytorch/pytorch/issues/3665</a> Sure! The specific model and all its parameters, including the optimizer, will all be together in the same file. So, you just need to load the corresponding file. So, if we set the variable epoch=10, then the filename as determined above will be
epoch = 10
PATH = 'train_valid_exp4-epoch{}.pth'.format(… | 960 | {'text': ['Do you have cuDNN installed and enabled?\n\nIf so, could you check if your observation is same as the issue described here? <a href="https://github.com/pytorch/pytorch/issues/3665">https://github.com/pytorch/pytorch/issues/3665</a>'], 'answer_start': [960]} |
How resume the saved trained model at specific epoch | I saved the model after 150 epochs this way: torch.save(model.state_dict(), 'train_valid_exp4.pth')
I can load the model and test it by model.load_state_dict(torch.load('train_valid_exp4.pth')), which I assume returns the model from the last epoch.
My model seems to perform better at epoch 40, s… | 1 | 2019-01-28T20:42:24.956Z | Sure! The specific model and all its parameters, including the optimizer, will all be together in the same file. So, you just need to load the corresponding file. So, if we set the variable epoch=10, then the filename as determined above will be
epoch = 10
PATH = 'train_valid_exp4-epoch{}.pth'.format(… | 1 | 2019-01-29T02:11:08.478Z | https://discuss.pytorch.org/t/how-resume-the-saved-trained-model-at-specific-epoch/35823/10 | The solution can be found <a href="https://github.com/tensorflow/tensorflow/issues/5777#issuecomment-301058363" rel="nofollow noopener">here</a>. Basically, run the following commands in the terminal:
sudo rmmod nvidia_uvm
sudo rmmod nvidia
sudo modprobe nvidia
sudo modprobe nvidia_uvm Do you have cuDNN installed and enabled?
If so, could you check if your observation is the same as the issue described here? <a href="https://github.com/pytorch/pytorch/issues/3665">https://github.com/pytorch/pytorch/issues/3665</a> Sure! The specific model and all its parameters, including the optimizer, will all be together in the same file. So, you just need to load the corresponding file. So, if we set the variable epoch=10, then the filename as determined above will be
epoch = 10
PATH = 'train_valid_exp4-epoch{}.pth'.format(… | 521 | {'text': ['Sure! specific model and all its parameters, including the optimizer, will al be together in the same file. So, you just need to load the corresponding file. So, if we determine variable epoch=10, then the filename as determined above will be\n\nepoch = 10\n\nPATH = 'train_valid_exp4-epoch{}.pth'.format(…'], 'answer_start': [521]} |
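A hedged sketch of the per-epoch checkpointing this answer implies; saving the optimizer state alongside the model is an extra, optional step:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# inside the training loop: one file per epoch
epoch = 40
PATH = 'train_valid_exp4-epoch{}.pth'.format(epoch)
torch.save({'epoch': epoch,
            'model_state_dict': model.state_dict(),
            'optimizer_state_dict': optimizer.state_dict()}, PATH)

# later: resume from the epoch that performed best
checkpoint = torch.load(PATH)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])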
Shuffle issue in DataLoader? How to get the same data shuffle results with fixed seed but different network? | The shuffle results of the DataLoader change with different network architectures;
I set a fixed random seed at the start as below:
torch.backends.cudnn.deterministic = True
random.seed(1)
torch.manual_seed(1)
torch.cuda.manual_seed(1)
np.random.seed(1)
and I can get same shuffle results every … | 1 | 2019-05-16T02:43:28.531Z | I think that you are initializing the network before the dataloader. For this reason, when you change the network size, the samples generated by the dataloader also change. Because, as you know all filters and bias need to be initialized normally using random methods. A change of the number of times… | 13 | 2019-05-17T09:10:15.282Z | https://discuss.pytorch.org/t/shuffle-issue-in-dataloader-how-to-get-the-same-data-shuffle-results-with-fixed-seed-but-different-network/45357/5 | I think that you are initializing the network before the dataloader. For this reason, when you change the network size, the samples generated by the dataloader also change. Because, as you know all filters and bias need to be initialized normally using random methods. A change of the number of times… Based on your weights, I assume you might have multiples of this distribution:
class_counts = torch.tensor([104, 642, 784])
If so, I’ve manipulated my example code to use your weights and data distribution to get approx. equally distributed batches:
# Create dummy data with class imbalance 99 to … I have one question. In the num_classes argument do we have to include Background also as a class. Because originally I have 4 classes. When I run with 5 classes (including background) it does not throw me any error. | 1,676 | {'text': ['I think that you are initializing the network before the dataloader. For this reason, when you change the network size, the samples generated by the dataloader also change. Because, as you know all filters and bias need to be initialized normally using random methods. A change of the number of times…'], 'answer_start': [1676]} |
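A short sketch of the implied fix: re-seed after the model is built, so its size-dependent random initialization cannot shift the DataLoader's shuffle stream:

import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10))

torch.manual_seed(1)
model = torch.nn.Linear(8, 8)  # init consumes numbers from the global RNG...

torch.manual_seed(1)           # ...so re-seed before the loader is used
loader = DataLoader(dataset, batch_size=2, shuffle=True)
print([batch[0].tolist() for batch in loader])  # same order for any model size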
Some problems with WeightedRandomSampler | Dear groupers,
I work on an unbalanced dataset. There are six classes in my dataset. The first class has 568330 samples, the second class has 43000 samples, the third class has 34900, the fourth class has 20910, the fifth class has 14590, and the last class has 9712 samples. I used WeightedRandomSampler… | 1 | 2018-08-16T07:34:59.586Z | Based on your weights, I assume you might have multiples of this distribution:
class_counts = torch.tensor([104, 642, 784])
If so, I’ve manipulated my example code to use your weights and data distribution to get approx. equally distributed batches:
# Create dummy data with class imbalance 99 to … | 1 | 2019-02-28T16:12:59.536Z | https://discuss.pytorch.org/t/some-problems-with-weightedrandomsampler/23242/34 | I think that you are initializing the network before the dataloader. For this reason, when you change the network size, the samples generated by the dataloader also change. Because, as you know all filters and bias need to be initialized normally using random methods. A change of the number of times… Based on your weights, I assume you might have multiples of this distribution:
class_counts = torch.tensor([104, 642, 784])
If so, I’ve manipulated my example code to use your weights and data distribution to get approx. equally distributed batches:
# Create dummy data with class imbalance 99 to … I have one question. In the num_classes argument do we have to include Background also as a class. Because originally I have 4 classes. When I run with 5 classes (including background) it does not throw me any error. | 1,147 | {'text': ['Based on your weights, I assume you might have multiples of this distribution:\n\nclass_counts = torch.tensor([104, 642, 784])\n\nIf so, I’ve manipulated my example code to use your weights and data distribution to get approx. equally distributed batches:\n\n# Create dummy data with class imbalance 99 to …'], 'answer_start': [1147]} |
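A self-contained sketch of the per-sample weighting the answer describes, using the same illustrative class counts:

import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

targets = torch.cat([torch.zeros(104), torch.ones(642), torch.full((784,), 2)]).long()
dataset = TensorDataset(torch.randn(len(targets), 8), targets)

class_counts = torch.bincount(targets)                # tensor([104, 642, 784])
sample_weights = 1.0 / class_counts[targets].float()  # one weight per sample
sampler = WeightedRandomSampler(sample_weights,
                                num_samples=len(sample_weights), replacement=True)

loader = DataLoader(dataset, batch_size=64, sampler=sampler)
_, y = next(iter(loader))
print(torch.bincount(y, minlength=3))                 # roughly balanced batch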
CUDA error: an illegal memory access was encountered | Hi, all. I am getting a weird illegal memory access error whenever I try to train a FasterRCNN model with an image size of (1280,840,3) and a batch size of 3. The GPU used is a Tesla K80 with CUDA 10.1 on an Ubuntu OS. I am using PyTorch 1.5 and torchvision 0.6. Given below is the code snippet.
def from_n… | 2 | 2020-05-19T07:06:17.671Z | I have one question. In the num_classes argument do we have to include Background also as a class. Because originally I have 4 classes. When I run with 5 classes (including background) it does not throw me any error. | 0 | 2020-05-22T09:48:36.977Z | https://discuss.pytorch.org/t/cuda-error-an-illegal-memory-access-was-encountered/81940/8 | I think that you are initializing the network before the dataloader. For this reason, when you change the network size, the samples generated by the dataloader also change. Because, as you know all filters and bias need to be initialized normally using random methods. A change of the number of times… Based on your weights, I assume you might have multiples of this distribution:
class_counts = torch.tensor([104, 642, 784])
If so, I’ve manipulated my example code to use your weights and data distribution to get approx. equally distributed batches:
# Create dummy data with class imbalance 99 to … I have one question. In the num_classes argument do we have to include Background also as a class. Because originally I have 4 classes. When I run with 5 classes (including background) it does not throw me any error. | 618 | {'text': ['I have one question. In the num_classes argument do we have to include Background also as a class. Because originally I have 4 classes. When I run with 5 classes (including background) it does not throw me any error.'], 'answer_start': [618]} |
CNN results negative when using log_softmax and nll loss | Hi all, I’m using the nll_loss function in conjunction with log_softmax as advised in the documentation when creating a CNN. However, when I test new images, I get negative numbers rather than 0-1 limited results. This is really strange given the bound nature of the softmax function and I was wonder… | 2 | 2018-04-23T14:36:31.060Z | Since you are using the logarithm on softmax, you will get numbers in [-inf, 0], since log(0)=-inf and log(1)=0.
You could get the probabilities back by using torch.exp(output). | 10 | 2018-04-23T14:41:53.857Z | https://discuss.pytorch.org/t/cnn-results-negative-when-using-log-softmax-and-nll-loss/16839/2 | Since you are using the logarithm on softmax, you will get numbers in [-inf, 0], since log(0)=-inf and log(1)=0.
You could get the probabilities back by using torch.exp(output). If your targets contain the class indices already, you should remove the channel dimension:
target = target.squeeze(1) You could get the indices for all class1 labels and then index the labels and data:
dataset = datasets.MNIST(root='./data')
idx = dataset.train_labels==1
dataset.train_labels = dataset.train_labels[idx]
dataset.train_data = dataset.train_data[idx]
However, your model won’t learn anything as you ju… | 1,674 | {'text': ['Since you are using the logarithm on softmax, you will get numbers in [-inf, 0], since log(0)=-inf and log(1)=0.\n\nYou could get the probabilities back by using torch.exp(output).'], 'answer_start': [1674]} |
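A five-line illustration of the value ranges stated above:

import torch
import torch.nn.functional as F

log_probs = F.log_softmax(torch.randn(2, 5), dim=1)  # values in [-inf, 0]
probs = torch.exp(log_probs)                         # back to [0, 1]
print(probs.sum(dim=1))                              # each row sums to 1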
Only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [1, 1, 256, 256] | Hi all! I’m trying to find objects in medical images, which are grayscale, and I only have two classes: background and the lesion.
I’m scaling my images to 256*256, and I’ve mapped the masks png color numbers as suggested by <a class="mention" href="/u/ptrblck">@ptrblck</a> in multiple topics. However, I’m still getting the error in the ti… | 1 | 2019-06-27T15:10:28.569Z | If your targets contain the class indices already, you should remove the channel dimension:
target = target.squeeze(1) | 4 | 2019-06-27T15:11:57.471Z | https://discuss.pytorch.org/t/only-batches-of-spatial-targets-supported-non-empty-3d-tensors-but-got-targets-of-size-1-1-256-256/49134/2 | Since you are using the logarithm on softmax, you will get numbers in [-inf, 0], since log(0)=-inf and log(1)=0.
You could get the probabilities back by using torch.exp(output). If your targets contain the class indices already, you should remove the channel dimension:
target = target.squeeze(1) You could get the indices for all class1 labels and then index the labels and data:
dataset = datasets.MNIST(root='./data')
idx = dataset.train_labels==1
dataset.train_labels = dataset.train_labels[idx]
dataset.train_data = dataset.train_data[idx]
However, your model won’t learn anything as you ju… | 1,016 | {'text': ['If your targets contain the class indices already, you should remove the channel dimension:\n\ntarget = target.squeeze(1)'], 'answer_start': [1016]} |
How to use one class of number in MNIST | Hello, I’m studying MNIST and want to train a model with only the number “1”, but I don’t know how to extract the “1” class out of the total dataset… I only know the code:
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
Thanks | 0 | 2018-10-01T02:01:21.068Z | You could get the indices for all class1 labels and then index the labels and data:
dataset = datasets.MNIST(root='./data')
idx = dataset.train_labels==1
dataset.train_labels = dataset.train_labels[idx]
dataset.train_data = dataset.train_data[idx]
However, your model won’t learn anything as you ju… | 12 | 2018-10-01T04:03:53.889Z | https://discuss.pytorch.org/t/how-to-use-one-class-of-number-in-mnist/26276/2 | Since you are using the logarithm on softmax, you will get numbers in [-inf, 0], since log(0)=-inf and log(1)=0.
You could get the probabilities back by using torch.exp(output). If your targets contain the class indices already, you should remove the channel dimension:
target = target.squeeze(1) You could get the indices for all class1 labels and then index the labels and data:
dataset = datasets.MNIST(root='./data')
idx = dataset.train_labels==1
dataset.train_labels = dataset.train_labels[idx]
dataset.train_data = dataset.train_data[idx]
However, your model won’t learn anything as you ju… | 299 | {'text': ['You could get the indices for all class1 labels and then index the labels and data:\n\ndataset = datasets.MNIST(root='./data')\n\nidx = dataset.train_labels==1\n\ndataset.train_labels = dataset.train_labels[idx]\n\ndataset.train_data = dataset.train_data[idx]\n\nHowever, your model won’t learn anything as you ju…'], 'answer_start': [299]} |
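The attribute names in the answer date from older torchvision; on current versions they are targets/data, as in this hedged sketch:

import torch
from torchvision import datasets

dataset = datasets.MNIST(root='./data', train=True, download=True)
idx = dataset.targets == 1  # boolean mask for the "1" class
dataset.targets = dataset.targets[idx]
dataset.data = dataset.data[idx]
print(len(dataset))         # ~6742 samples of the digit 1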
Using torch.Tensor over multiprocessing.Queue + Process fails | Hi,
Context
I have a simple algorithm that distributes a number of tasks across a list of Process, then the results of the workers are sent back using a Queue. I was previously using numpy to do this kind of job.
Problem
To be more consistent with my code, I decided to use only torch tensors, unfor… | 1 | 2017-05-10T16:47:50.761Z | Your background process needs to be alive when the main process reads the tensor.
Here’s a small modification to your example:
import multiprocessing as mp
import torch
done = mp.Event()
def extractor_worker(done_queue):
done_queue.put(torch.Tensor(10,10))
done_queue.put(None)
done.w… | 7 | 2017-05-10T17:31:06.266Z | https://discuss.pytorch.org/t/using-torch-tensor-over-multiprocessing-queue-process-fails/2847/2 | Your background process needs to be alive when the main process reads the tensor.
Here’s a small modification to your example:
import multiprocessing as mp
import torch
done = mp.Event()
def extractor_worker(done_queue):
done_queue.put(torch.Tensor(10,10))
done_queue.put(None)
done.w… Hi, in your bash, your command should look like:
CUDA_VISIBLE_DEVICES=0,3 python train.py
and in your train.py, the gpus config should be set to [0,1]. Thanks for the explanation.
In this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity?
Each channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.
a = torch.randn(1, 2, 10, 1… | 1,236 | {'text': ['Your background process needs to be alive when the main process reads the tensor.\n\nHere’s a small modification to your example:\n\nimport multiprocessing as mp\n\nimport torch\n\ndone = mp.Event()\n\ndef extractor_worker(done_queue):\n\ndone_queue.put(torch.Tensor(10,10))\n\ndone_queue.put(None)\n\ndone.w…'], 'answer_start': [1236]} |
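A completed, hedged version of the pattern in the answer: the worker posts a None sentinel and stays alive (via the Event) until the parent has consumed the tensor:

import multiprocessing as mp
import torch

def extractor_worker(done_queue, done):
    done_queue.put(torch.Tensor(10, 10))  # a shared-memory handle goes on the queue
    done_queue.put(None)                  # sentinel: no more results
    done.wait()                           # keep this process alive until consumed

if __name__ == '__main__':
    done = mp.Event()
    q = mp.Queue()
    p = mp.Process(target=extractor_worker, args=(q, done))
    p.start()
    tensors = []
    while (t := q.get()) is not None:
        tensors.append(t)
    done.set()  # the parent holds the tensors now, the worker may exit
    p.join()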
How to solve the problem of `RuntimeError: all tensors must be on devices[0]` | code:
for i, (input, target) in enumerate(test_loader):
target = target.cuda(async=True) # in test loader, pin_memory = True
input_var = torch.autograd.Variable(input, volatile=False)
# (Batch_Size, 10L, 3L, 32L, 224L, 224L)
b, s, c, t, h, w = input_var.size()
# view in (Batch_Size * 10L, 3L,… | 1 | 2018-03-20T08:57:37.932Z | Hi, in your bash, your command should look like:
CUDA_VISIBLE_DEVICES=0,3 python train.py
and in your train.py, the gpus config should be set to [0,1]. | 5 | 2018-04-27T04:48:27.635Z | https://discuss.pytorch.org/t/how-to-solve-the-problem-of-runtimeerror-all-tensors-must-be-on-devices-0/15198/13 | Your background process needs to be alive when the main process reads the tensor.
Here’s a small modification to your example:
import multiprocessing as mp
import torch
done = mp.Event()
def extractor_worker(done_queue):
done_queue.put(torch.Tensor(10,10))
done_queue.put(None)
done.w… Hi, in your bash, your command should look like:
CUDA_VISIBLE_DEVICES=0,3 python train.py
and in your train.py, the gpus config should be set to [0,1]. Thanks for the explanation.
In this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity?
Each channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.
a = torch.randn(1, 2, 10, 1… | 919 | {'text': ['Hi, in your bash, your command should like:\n\nCUDA_VISIBLE_DEVICES=0,3 python train.py\n\nand in your train.py, the gpus config should be set [0,1].'], 'answer_start': [919]} |
Understanding cosine similarity function in pytorch | I have a little difficulty understanding what happens when we use the PyTorch cosine similarity function.
considering this example:
input1 = torch.abs(torch.randn(1,2,20, 20))
input2 = torch.abs(torch.randn(1,2,20, 20))
cos = nn.CosineSimilarity(dim=1, eps=1e-6)
output = cos(input1, input2)
print(outp… | 2 | 2018-11-18T01:30:37.395Z | Thanks for the explanation.
In this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity?
Each channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.
a = torch.randn(1, 2, 10, 1… | 2 | 2018-11-20T14:15:40.500Z | https://discuss.pytorch.org/t/underrstanding-cosine-similarity-function-in-pytorch/29865/11 | Your background process needs to be alive when the main process reads the tensor.
Here’s a small modification to your example:
import multiprocessing as mp
import torch
done = mp.Event()
def extractor_worker(done_queue):
done_queue.put(torch.Tensor(10,10))
done_queue.put(None)
done.w… Hi, in your bash, your command should like:
CUDA_VISIBLE_DEVICES=0,3 python train.py
and in your train.py, the gpus config should be set [0,1]. Thanks for the explanation.
In this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity?
Each channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.
a = torch.randn(1, 2, 10, 1… | 447 | {'text': ['Thanks for the explanation.\n\nIn this case, would you want to use the 10x10 pixels as the vector to calculate the cosine similarity?\n\nEach channel would therefore hold a 100-dimensional vector pointing somewhere and you could calculate the similarity between the channels.\n\na = torch.randn(1, 2, 10, 1…'], 'answer_start': [447]} |
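A small sketch contrasting the two readings discussed in that thread: per-pixel similarity along the channel dimension versus per-channel similarity over the flattened spatial vector (sizes are illustrative):

import torch
import torch.nn as nn

a = torch.randn(1, 2, 10, 10)
b = torch.randn(1, 2, 10, 10)

# per pixel: each position contributes a 2-dim channel vector
out = nn.CosineSimilarity(dim=1, eps=1e-6)(a, b)
print(out.shape)    # torch.Size([1, 10, 10])

# per channel: each channel is a 100-dim vector over the flattened pixels
out2 = nn.CosineSimilarity(dim=2)(a.view(1, 2, -1), b.view(1, 2, -1))
print(out2.shape)   # torch.Size([1, 2])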
Confusion matrix | Hello, I built an FNN for 4-class classification.
How is it possible to calculate the confusion matrix? | 1 | 2018-07-11T13:19:18.000Z | To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.
Try to call torch.argmax(pred, 1) to get the predicted classes.
Here is a small example:
output = torch.randn(1, 2, 4, 4)
pred = torch.argmax(output… | 2 | 2018-12-31T14:46:31.394Z | https://discuss.pytorch.org/t/confusion-matrix/21026/9 | To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.
Try to call torch.argmax(pred, 1) to get the predicted classes.
Here is a small example:
output = torch.randn(1, 2, 4, 4)
pred = torch.argmax(output… You might want to add a print statement to check the object your model(seq) returns and make sure it contains what you expect. In your first use case (different number of input channels) you could add a conv layer before the pre-trained model and return 3 out_channels.
For different input sizes you could have a look at the <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L29">source code of vgg16</a>. There you could perform some model surgery and add an adaptive pooling layer in… | 1,510 | {'text': ['To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.\n\nTry to call torch.argmax(pred, 1) to get the predicted classes.\n\nHere is a small example:\n\noutput = torch.randn(1, 2, 4, 4)\n\npred = torch.argmax(output…'], 'answer_start': [1510]} |
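A self-contained sketch of that recipe for a multi-class case, using the common bincount trick to build the matrix (rows are targets, columns are predictions; the shapes are made up):

import torch

num_classes = 4
output = torch.randn(32, num_classes)            # model logits
target = torch.randint(0, num_classes, (32,))    # ground-truth labels

pred = torch.argmax(output, dim=1)
conf = torch.bincount(target * num_classes + pred,
                      minlength=num_classes ** 2).reshape(num_classes, num_classes)
print(conf)                      # conf[i, j] = samples of class i predicted as j
print(conf.trace().item(), 'correct out of', conf.sum().item())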
ValueError: only one element tensors can be converted to Python scalars | Hey, guys! I’m using Google Colab and I’m facing this error and don’t know how to fix it. Can you help me solve this error: “ValueError: only one element tensors can be converted to Python scalars”
6 model.hidden = (torch.zeros(1, 1, model.hidden_layer_size),
7 … | 0 | 2019-11-01T21:09:06.860Z | You might want to add a print statement to check the object your model(seq) returns and make sure it contains what you expect. | 2 | 2019-11-01T21:47:06.083Z | https://discuss.pytorch.org/t/valueerror-only-one-element-tensors-can-be-converted-to-python-scalars/59800/4 | To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.
Try to call torch.argmax(pred, 1) to get the predicted classes.
Here is a small example:
output = torch.randn(1, 2, 4, 4)
pred = torch.argmax(output… You might want to add a print statement to check the object your model(seq) returns and make sure it contains what you expect. In your first use case (different number of input channels) you could add a conv layer before the pre-trained model and return 3 out_channels.
For different input sizes you could have a look at the <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L29">source code of vgg16</a>. There you could perform some model surgery and add an adaptive pooling layer in… | 1,065 | {'text': ['You might want to add a print statement to check the object your model(seq) returns and make sure it contains what you expect.'], 'answer_start': [1065]}
Transfer learning usage with different input size | VGG16 and Resnet require input images to be of size 224X224X3. I know my question may be stupid, but is there any chance to use these pretrained networks on datasets with different input sizes (for example, black and white images of size 224X224X1, or images of different sizes, which I don’t want t…
For different input sizes you could have a look at the <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L29">source code of vgg16</a>. There you could perform some model surgery and add an adaptive pooling layer in… | 3 | 2018-07-05T08:58:31.094Z | https://discuss.pytorch.org/t/transfer-learning-usage-with-different-input-size/20744/2 | To calculate the confusion matrix you need the class predictions. Currently it looks like pred contains the logits or probabilities for two classes.
Try to call torch.argmax(pred, 1) to get the predicted classes.
Here is a small example:
output = torch.randn(1, 2, 4, 4)
pred = torch.argmax(output… You might want to add a print statement to check the object your model(seq) returns and make it contains what you expect. In your first use case (different number of input channels) you could add a conv layer before the pre-trained model and return 3 out_channels.
For different input sizes you could have a look at the <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L29">source code of vgg16</a>. There you could perform some model surgery and add an adaptive pooling layer in… | 432 | {'text': ['In your first use case (different number of input channels) you could add a conv layer before the pre-trained model and return 3 out_channels.\n\nFor different input sizes you could have a look at the <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py#L29">source code of vgg16</a>. There you could perform some model surgery and add an adaptive pooling layer in…'], 'answer_start': [432]} |
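A minimal sketch of the first suggestion: a learned 1-to-3-channel adapter conv in front of a pretrained backbone. The pretrained flag is the older torchvision API and the class name is made up:

import torch
import torch.nn as nn
from torchvision import models

class GrayscaleVGG(nn.Module):
    def __init__(self):
        super().__init__()
        self.adapter = nn.Conv2d(1, 3, kernel_size=1)   # learns a 1->3 mapping
        self.backbone = models.vgg16(pretrained=True)

    def forward(self, x):
        return self.backbone(self.adapter(x))

model = GrayscaleVGG()
print(model(torch.randn(2, 1, 224, 224)).shape)     # torch.Size([2, 1000])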
How to make a customised dataset for semantic segmentation? | I have two dataset folders of tif images: one is a folder called BMMCdata, and the other one holds the masks of the BMMCdata images, called BMMCmasks (the names of the images correspond). I am trying to make a customised dataset and also split the data randomly into train and test. Thank you in advance. at the m…
image_paths should be a list of all paths to your images.
You can get all image paths using the file extension and a wildcard:
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCdata\\data\\*.jpg")
folder_mask … | 2 | 2018-11-29T22:46:39.094Z | https://discuss.pytorch.org/t/how-make-customised-dataset-for-semantic-segmentation/30881/5 | Currently you are just returning the length of the path, not the number of images.
image_paths should be a list of all paths to your images.
You can get all image paths using the file extension and a wildcard:
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCdata\\data\\*.jpg")
folder_mask … I think, the example was written prior to the stable release of libtorch. The way you would implement the torch::nn::Module now is as follows
struct NetImpl : torch::nn::Module { // replaced Net by NetImpl
NetImpl() // replaced Net by NetImpl
: conv1(tor… Please uninstall cpuonly in your conda environment. If torch.version.cuda returns none, then it means that you are using a CPU only binary. | 1,660 | {'text': ['Currently you are just returning the length of the path, not the number of images.\n\nimage_paths should be a list of all paths to your images.\n\nYou can get all image paths using the file extension and a wildcard:\n\nfolder_data = glob.glob("D:\\\\Neda\\\\Pytorch\\\\U-net\\\\BMMCdata\\\\data\\\\*.jpg")\n\nfolder_mask …'], 'answer_start': [1660]} |
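A sketch of a paired image/mask Dataset built on such glob lists, with a random train/test split; the paths and the .tif reader are assumptions based on the question:

import glob
import torch
from skimage import io
from torch.utils.data import Dataset, random_split

class BMMCDataset(Dataset):
    def __init__(self, image_glob, mask_glob):
        # sorted() keeps images and masks aligned by filename
        self.image_paths = sorted(glob.glob(image_glob))
        self.mask_paths = sorted(glob.glob(mask_glob))

    def __len__(self):
        return len(self.image_paths)   # the number of files, not len() of a path

    def __getitem__(self, idx):
        image = torch.from_numpy(io.imread(self.image_paths[idx]))
        mask = torch.from_numpy(io.imread(self.mask_paths[idx]))
        return image, mask

dataset = BMMCDataset('BMMCdata/*.tif', 'BMMCmasks/*.tif')
n_train = int(0.8 * len(dataset))
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])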
(libtorch) How to save model in MNIST cpp example? | I’m running the mnist <a href="https://github.com/goldsborough/examples/tree/cpp/cpp/mnist" rel="nofollow noopener">example</a> and trying to save the trained model to disk:
torch::save(model, "model.pt") // save model using torch::save
Then got error as:
In file included from /home/christding/env/libtorch/include/torch/csrc/api/include/torch/all.h:8:0,
from /home/christding/env/libtor… | 2 | 2019-01-09T09:35:05.318Z | I think, the example was written prior to the stable release of libtorch. The way you would implement the torch::nn::Module now is as follows
struct NetImpl : torch::nn::Module { // replaced Net by NetImpl
NetImpl() // replaced Net by NetImpl
: conv1(tor… | 5 | 2019-04-05T08:13:17.416Z | https://discuss.pytorch.org/t/libtorch-how-to-save-model-in-mnist-cpp-example/34234/5 | Currently you are just returning the length of the path, not the number of images.
image_paths should be a list of all paths to your images.
You can get all image paths using the file extension and a wildcard:
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCdata\\data\\*.jpg")
folder_mask … I think, the example was written prior to the stable release of libtorch. The way you would implement the torch::nn::Module now is as follows
struct NetImpl : torch::nn::Module { // replaced Net by NetImpl
NetImpl() // replaced Net by NetImpl
: conv1(tor… Please uninstall cpuonly in your conda environment. If torch.version.cuda returns none, then it means that you are using a CPU only binary. | 1,150 | {'text': ['I think, the example was written prior to the stable release of libtorch. The way you would implement the torch::nn::Module now is as follows\n\nstruct NetImpl : torch::nn::Module { // replaced Net by NetImpl\n\nNetImpl() // replaced Net by NetImpl\n\n: conv1(tor…'], 'answer_start': [1150]} |
Torch CUDA is not available | Running the following returns False:
import torch
torch.cuda.is_available()
nvidia-smi output: (driver version seems compatible with CUDA version)
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 442.19 Driver Version: 442.19 CUDA Version: 10.2… | 0 | 2020-03-30T21:15:03.688Z | Please uninstall cpuonly in your conda environment. If torch.version.cuda returns none, then it means that you are using a CPU only binary. | 0 | 2020-03-31T15:54:57.771Z | https://discuss.pytorch.org/t/torch-cuda-is-not-available/74845/4 | Currently you are just returning the length of the path, not the number of images.
image_paths should be a list of all paths to your images.
You can get all image paths using the file extension and a wildcard:
folder_data = glob.glob("D:\\Neda\\Pytorch\\U-net\\BMMCdata\\data\\*.jpg")
folder_mask … I think, the example was written prior to the stable release of libtorch. The way you would implement the torch::nn::Module now is as follows
struct NetImpl : torch::nn::Module { // replaced Net by NetImpl
NetImpl() // replaced Net by NetImpl
: conv1(tor… Please uninstall cpuonly in your conda environment. If torch.version.cuda returns none, then it means that you are using a CPU only binary. | 623 | {'text': ['Please uninstall cpuonly in your conda environment. If torch.version.cuda returns none, then it means that you are using a CPU only binary.'], 'answer_start': [623]} |
Any way to check if two tensors have the same base | OK, here is the example.
x = torch.randn(4, 4)
y = x.view(2,-1)
How can I make sure y has the same origin and different metadata compared to x, and that there was no clone() operation involved, like there would be in:
x = torch.randn(4, 4)
y = x.clone().view(2,-1)
or in the case of reshape() non con… | 2 | 2019-05-03T20:37:27.731Z | I’m not sure I fully understood your question, but I’ll try to answer:
import torch
x = torch.randn(4, 4)
y = x.view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints True
y = x.clone().view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints False
But it doesn’t work if you are interested in comp… | 8 | 2019-05-03T22:45:37.463Z | https://discuss.pytorch.org/t/any-way-to-check-if-two-tensors-have-the-same-base/44310/2 | I’m not sure I fully understood your question, but I’ll try to answer:
import torch
x = torch.randn(4, 4)
y = x.view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints True
y = x.clone().view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints False
But it doesn’t work if you are interested in comp… If the first iteration creates NaN gradients (e.g. due to a high scaling factor and thus gradient overflow), the optimizer.step() will be skipped and you might get this warning.
You could check the scaling factor via scaler.get_scale() and skip the learning rate scheduler, if it was decreased. I th… [image] jscriptcoder:
I’m still wondering why torch.version.cuda says 10.0.130. I could try to install that version of CUDA instead?
If you’ve installed a PyTorch binary, the local CUDA version will not be used.
Uninstall all binary installations and try to rebuild PyTorch with your local… | 1,524 | {'text': ['I’m not sure I fully understood your question, but I’ll try to answer:\n\nimport torch\n\nx = torch.randn(4, 4)\n\ny = x.view(2,-1)\n\nprint(x.data_ptr() == y.data_ptr()) # prints True\n\ny = x.clone().view(2,-1)\n\nprint(x.data_ptr() == y.data_ptr()) # prints False\n\nBut it doesn’t work if you are interested in comp…'], 'answer_start': [1524]} |
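A runnable sketch of the data_ptr() check, including the caveat hinted at in the truncated tail: a slice that skips the first element shares storage yet reports a different data_ptr, so comparing the storage pointers is the more robust test:

import torch

x = torch.randn(4, 4)
y = x.view(2, -1)
print(x.data_ptr() == y.data_ptr())    # True: both start at the same element

z = x[:, 1:]                           # shares memory, but starts one element in
print(x.data_ptr() == z.data_ptr())    # False, despite the shared storage
print(x.storage().data_ptr() == z.storage().data_ptr())    # True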
`optimizer.step()` before `lr_scheduler.step()` error using GradScaler | Even though I think my code calls optimizer.step() via the GradScaler before the lr_scheduler.step() function, I am still getting this warning:
/opt/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:123: UserWarning: Detected call of lr_scheduler.step() befor… | 2 | 2020-08-15T12:29:56.944Z | If the first iteration creates NaN gradients (e.g. due to a high scaling factor and thus gradient overflow), the optimizer.step() will be skipped and you might get this warning.
You could check the scaling factor via scaler.get_scale() and skip the learning rate scheduler, if it was decreased. I th… | 4 | 2020-08-18T06:50:04.804Z | https://discuss.pytorch.org/t/optimizer-step-before-lr-scheduler-step-error-using-gradscaler/92930/2 | I’m not sure I fully understood your question, but I’ll try to answer:
import torch
x = torch.randn(4, 4)
y = x.view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints True
y = x.clone().view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints False
But it doesn’t work if you are interested in comp… If the first iteration creates NaN gradients (e.g. due to a high scaling factor and thus gradient overflow), the optimizer.step() will be skipped and you might get this warning.
You could check the scaling factor via scaler.get_scale() and skip the learning rate scheduler, if it was decreased. I th… [image] jscriptcoder:
I’m still wondering why torch.version.cuda says 10.0.130. I could try to install that version of CUDA instead?
If you’ve installed a PyTorch binary, the local CUDA version will not be used.
Uninstall all binary installations and try to rebuild PyTorch with your local… | 1,076 | {'text': ['If the first iteration creates NaN gradients (e.g. due to a high scaling factor and thus gradient overflow), the optimizer.step() will be skipped and you might get this warning.\n\nYou could check the scaling factor via scaler.get_scale() and skip the learning rate scheduler, if it was decreased. I th…'], 'answer_start': [1076]} |
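A sketch of the suggested pattern: compare the scale before and after scaler.update() and only step the scheduler when the optimizer step was not skipped. The model, data and hyper-parameters are placeholders, and a CUDA device is assumed:

import torch

model = torch.nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
scaler = torch.cuda.amp.GradScaler()

for _ in range(100):
    data = torch.randn(8, 10, device='cuda')
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(data).sum()
    scale_before = scaler.get_scale()
    scaler.scale(loss).backward()
    scaler.step(optimizer)     # skipped internally if grads contain inf/NaN
    scaler.update()            # lowers the scale when the step was skipped
    if scaler.get_scale() >= scale_before:
        scheduler.step()       # only count steps that actually happened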
Unable to find a valid cuDNN algorithm to run convolution | I just got this message when trying to run a feed forward torch.nn.Conv2d, getting the following stacktrace:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-26-04bd4a00565d> in <mo… | 1 | 2020-04-27T20:43:39.253Z | [image] jscriptcoder:
I’m still wondering why torch.version.cuda says 10.0.130. I could try to install that version of CUDA instead?
If you’ve installed a PyTorch binary, the local CUDA version will not be used.
Uninstall all binary installations and try to rebuild PyTorch with your local… | 2 | 2020-05-03T00:48:10.718Z | https://discuss.pytorch.org/t/unable-to-find-a-valid-cudnn-algorithm-to-run-convolution/78724/11 | I’m not sure I fully understood your question, but I’ll try to answer:
import torch
x = torch.randn(4, 4)
y = x.view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints True
y = x.clone().view(2,-1)
print(x.data_ptr() == y.data_ptr()) # prints False
But it doesn’t work if you are interested in comp… If the first iteration creates NaN gradients (e.g. due to a high scaling factor and thus gradient overflow), the optimizer.step() will be skipped and you might get this warning.
You could check the scaling factor via scaler.get_scale() and skip the learning rate scheduler, if it was decreased. I th… [image] jscriptcoder:
I’m still wondering why torch.version.cuda says 10.0.130. I could try to install that version of CUDA instead?
If you’ve installed a PyTorch binary, the local CUDA version will not be used.
Uninstall all binary installations and try to rebuild PyTorch with your local… | 623 | {'text': ['[image] jscriptcoder:\n\nI’m still wondering why torch.version.cuda says 10.0.130. I could try to install that version of CUDA instead?\n\nIf you’ve installed a PyTorch binary, the local CUDA version will not be used.\n\nUninstall all binary installations and try to rebuild PyTorch with your local…'], 'answer_start': [623]}
RuntimeError: running_mean should contain 64 elements not 96 | Hi,
I am trying to train pnasnet5large from scratch on my custom dataset and I am using the pretrainedmodels package. I have modified my input and final layers as suggested on this site:
<a href="https://github.com/Cadene/pretrained-models.pytorch" rel="nofollow noopener">https://github.com/Cadene/pretrained-models.pytorch</a>.
My code snippet
model = pnasnet5large(pretrained=“imagenet”) … | 1 | 2018-11-29T07:32:43.721Z | Based on the error message, it looks like the BatchNorm layer after conv1 is using 96 input channels, while you are passing 64.
Try to change the number of kernels to 96 and try it again. | 7 | 2018-11-29T12:09:38.161Z | https://discuss.pytorch.org/t/runtimeerror-running-mean-should-contain-64-elements-not-96/30846/2 | Based on the error message, it looks like the BatchNorm layer after conv1 is using 96 input channels, while you are passing 64.
Try to change the number of kernels to 96 and try it again. Yes, a zero initial hidden state is standard, so much so that it is the default in nn.LSTM if you don’t pass in a hidden state (rather than, e.g. throwing an error). Random initialization could also be used if zeros don’t work. Two basic ideas here:
If your hidden state evolution is “ergodic”, the … A guess would be that BatchNorm uses Bessel’s correction for variance and this makes it NaN (computed variance is 0, n / (n - 1) * var = 1 / 0 * 0 = NaN. | 1,852 | {'text': ['Based on the error message, it looks like the BatchNorm layer after conv1 is using 96 input channels, while you are passing 64.\n\nTry to change the number of kernels to 96 and try it again.'], 'answer_start': [1852]} |
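A one-block sketch of the fix: the num_features of a BatchNorm layer has to equal the out_channels of the conv that feeds it:

import torch
import torch.nn as nn

block = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1),   # 96 output channels ...
    nn.BatchNorm2d(96),                           # ... so 96 features here
    nn.ReLU(),
)
print(block(torch.randn(1, 3, 32, 32)).shape)     # torch.Size([1, 96, 32, 32])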
Initialization of first hidden state in LSTM and truncated BPTT | Hi all,
I am trying to implement my first LSTM with pytorch and hence I am following some tutorials.
In particular I am following:
<a href="https://www.deeplearningwizard.com/deep_learning/practical_pytorch/pytorch_lstm_neuralnetwork/" class="onebox" target="_blank" rel="nofollow noopener">https://www.deeplearningwizard.com/deep_learning/practical_pytorch/pytorch_lstm_neuralnetwork/</a>
which looks like this:
class LSTMModel(nn.Module):
def __init__(s… | 1 | 2019-10-16T12:30:24.461Z | Yes, a zero initial hidden state is standard, so much so that it is the default in nn.LSTM if you don’t pass in a hidden state (rather than, e.g. throwing an error). Random initialization could also be used if zeros don’t work. Two basic ideas here:
If your hidden state evolution is “ergodic”, the … | 14 | 2019-10-17T11:50:28.856Z | https://discuss.pytorch.org/t/initialization-of-first-hidden-state-in-lstm-and-truncated-bptt/58384/2 | Based on the error message, it looks like the BatchNorm layer after conv1 is using 96 input channels, while you are passing 64.
Try to change the number of kernels to 96 and try it again. Yes, a zero initial hidden state is standard, so much so that it is the default in nn.LSTM if you don’t pass in a hidden state (rather than, e.g. throwing an error). Random initialization could also be used if zeros don’t work. Two basic ideas here:
If your hidden state evolution is “ergodic”, the … A guess would be that BatchNorm uses Bessel’s correction for variance and this makes it NaN (computed variance is 0, n / (n - 1) * var = 1 / 0 * 0 = NaN. | 1,115 | {'text': ['Yes, a zero initial hidden state is standard, so much so that it is the default in nn.LSTM if you don’t pass in a hidden state (rather than, e.g. throwing an error). Random initialization could also be used if zeros don’t work. Two basic ideas here:\n\nIf your hidden state evolution is “ergodic”, the …'], 'answer_start': [1115]}
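A quick check of the point about defaults: omitting the hidden state and passing explicit zeros give the same result:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1, batch_first=True)
x = torch.randn(4, 10, 8)

out1, _ = lstm(x)                   # hidden and cell state default to zeros
h0 = torch.zeros(1, 4, 16)
c0 = torch.zeros(1, 4, 16)
out2, _ = lstm(x, (h0, c0))         # explicit zeros
print(torch.allclose(out1, out2))   # True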
NaN when I use batch normalization (BatchNorm1d) | I made a module that uses the following MLP module:
class MLP(nn.Module):
def __init__(self, size_layers, activation):
super(MLP, self).__init__()
self.layers=[]
self.layersnorm = []
self.activation=activation
for i in range(len(size_layers)-1):
… | 2 | 2017-02-03T18:16:46.367Z | A guess would be that BatchNorm uses Bessel’s correction for variance and this makes it NaN (computed variance is 0, n / (n - 1) * var = 1 / 0 * 0 = NaN. | 4 | 2017-04-27T17:45:33.547Z | https://discuss.pytorch.org/t/nan-when-i-use-batch-normalization-batchnorm1d/322/9 | Based on the error message, it looks like the BatchNorm layer after conv1 is using 96 input channels, while you are passing 64.
Try to change the number of kernels to 96 and try it again. Yes, a zero initial hidden state is standard, so much so that it is the default in nn.LSTM if you don’t pass in a hidden state (rather than, e.g. throwing an error). Random initialization could also be used if zeros don’t work. Two basic ideas here:
If your hidden state evolution is “ergodic”, the … A guess would be that BatchNorm uses Bessel’s correction for variance and this makes it NaN (computed variance is 0, n / (n - 1) * var = 1 / 0 * 0 = NaN. | 495 | {'text': ['A guess would be that BatchNorm uses Bessel’s correction for variance and this makes it NaN (computed variance is 0, n / (n - 1) * var = 1 / 0 * 0 = NaN.'], 'answer_start': [495]} |
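The Bessel-correction explanation can be checked in two lines: the unbiased variance of a single value divides by n - 1 = 0:

import torch

x = torch.tensor([5.0])           # a 'batch' containing a single value
print(x.var(unbiased=False))      # tensor(0.)  -- biased variance
print(x.var(unbiased=True))       # tensor(nan) -- divides by n - 1 = 0

This is why running statistics can turn to NaN when a BatchNorm layer sees only one value per channel.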
1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [1, 3, 375, 1242] | My batch size is 1 and my image is 375 × 1242 × 3; I’ve converted the numpy image to a tensor image of shape 3 × 375 × 1242.
When I call
criterion = nn.CrossEntropyLoss().cuda()
loss = criterion(outputs, labels.long())
The error shows up as in the title:
RuntimeError: 1only batches of spatial targets suppo… | 1 | 2018-12-20T01:37:14.420Z | Problem solved, here is my code:
colors_all = torch.tensor([])
for i in range(len(mask_list)):
mask_str = mask_list[i]
mask_arr = io.imread(os.path.join(mask_dir, mask_str))
mask_tensor = torch.from_numpy(mask_arr)
mask_tensor = mask_tensor.permute(2,0,1)
# print(mask_tenso… | 1 | 2019-01-10T10:35:09.667Z | https://discuss.pytorch.org/t/1only-batches-of-spatial-targets-supported-non-empty-3d-tensors-but-got-targets-of-size-1-3-375-1242/32609/35 | Problem solved, here is my code:
colors_all = torch.tensor([])
for i in range(len(mask_list)):
mask_str = mask_list[i]
mask_arr = io.imread(os.path.join(mask_dir, mask_str))
mask_tensor = torch.from_numpy(mask_arr)
mask_tensor = mask_tensor.permute(2,0,1)
# print(mask_tenso… It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.rand… Yeah, I understand the issue and stumbled myself a few times over it.
I think one possible approach would be to use shared memory in Python e.g. with <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes">multiprocessing.Array</a>.
You could initialize an array of your known size for the complete Dataset, fill it in the first iteration using all workers, … | 1,296 | {'text': ['Problem solved, here is my code:\n\ncolors_all = torch.tensor([])\n\nfor i in range(len(mask_list)):\n\nmask_str = mask_list[i]\n\nmask_arr = io.imread(os.path.join(mask_dir, mask_str))\n\nmask_tensor = torch.from_numpy(mask_arr)\n\nmask_tensor = mask_tensor.permute(2,0,1)\n\n# print(mask_tenso…'], 'answer_start': [1296]} |
First nonzero index | I have a batch of N rows each of M values that are sorted along dim=1. For each row, I want to find the first nonzero element index from M sorted values. I’d like to do it efficiently without the for-loop.
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)
first_nonzero = f(x) | 1 | 2018-09-09T07:30:05.237Z | It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.rand… | 6 | 2018-09-09T11:35:20.344Z | https://discuss.pytorch.org/t/first-nonzero-index/24769/4 | Problem solved, here is my code:
colors_all = torch.tensor([])
for i in range(len(mask_list)):
mask_str = mask_list[i]
mask_arr = io.imread(os.path.join(mask_dir, mask_str))
mask_tensor = torch.from_numpy(mask_arr)
mask_tensor = mask_tensor.permute(2,0,1)
# print(mask_tenso… It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.rand… Yeah, I understand the issue and stumbled myself a few times over it.
I think one possible approach would be to use shared memory in Python e.g. with <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes">multiprocessing.Array</a>.
You could initialize an array of your known size for the complete Dataset, fill it in the first iteration using all workers, … | 942 | {'text': ['It is easier if you count the number of zero elements in that dimension\n\nx = torch.randn(5, 7)\n\nx[x<0] = 0\n\nx = x.sort(dim=1)[0] # You forgot that sort returns a pair\n\nfirst_nonzero = (x == 0).sum(dim=1)\n\nEven easier, you can skip the x[x<0] = 0 line and count the non-positive elements:\n\nx = torch.rand…'], 'answer_start': [942]} |
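Putting the pieces of that answer together into one runnable snippet, with the edge case of an all-zero row made explicit:

import torch

x = torch.randn(5, 7)
x[x < 0] = 0
x, _ = x.sort(dim=1)                   # zeros come first
first_nonzero = (x == 0).sum(dim=1)    # index of the first nonzero per row
print(first_nonzero)                   # an all-zero row yields x.size(1)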
Dataloader resets dataset state | I’ve implemented a custom dataset which generates and then caches the data for reuse.
If I use the DataLoader with num_workers=0 the first epoch is slow, as the data is generated during this time, but later the caching works and the training proceeds fast.
With a higher number of workers, the firs… | 2 | 2018-10-24T18:13:53.411Z | Yeah, I understand the issue and stumbled myself a few times over it.
I think one possible approach would be to use shared memory in Python e.g. with <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes">multiprocessing.Array</a>.
You could initialize an array of your known size for the complete Dataset, fill it in the first iteration using all workers, … | 10 | 2018-10-24T23:21:37.985Z | https://discuss.pytorch.org/t/dataloader-resets-dataset-state/27960/4 | Problem solved, here is my code:
colors_all = torch.tensor([])
for i in range(len(mask_list)):
mask_str = mask_list[i]
mask_arr = io.imread(os.path.join(mask_dir, mask_str))
mask_tensor = torch.from_numpy(mask_arr)
mask_tensor = mask_tensor.permute(2,0,1)
# print(mask_tenso… It is easier if you count the number of zero elements in that dimension
x = torch.randn(5, 7)
x[x<0] = 0
x = x.sort(dim=1)[0] # You forgot that sort returns a pair
first_nonzero = (x == 0).sum(dim=1)
Even easier, you can skip the x[x<0] = 0 line and count the non-positive elements:
x = torch.rand… Yeah, I understand the issue and stumbled myself a few times over it.
I think one possible approach would be to use shared memory in Python e.g. with <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes">multiprocessing.Array</a>.
You could initialize an array of your known size for the complete Dataset, fill it in the first iteration using all workers, … | 612 | {'text': ['Yeah, I understand the issue and stumbled myself a few times over it.\n\nI think one possible approach would be to use shared memory in Python e.g. with <a href="https://docs.python.org/2/library/multiprocessing.html#sharing-state-between-processes">multiprocessing.Array</a>.\n\nYou could initialize an array of your known size for the complete Dataset, fill it in the first iteration using all workers, …'], 'answer_start': [612]} |
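A sketch of the multiprocessing.Array idea for a dataset of known, fixed size. It relies on the workers being forked so they inherit the shared buffers, and the expensive generation step is stood in for by random numbers:

import ctypes
import multiprocessing as mp
import numpy as np
import torch
from torch.utils.data import Dataset

class CachedDataset(Dataset):
    def __init__(self, num_samples, sample_size):
        shared = mp.Array(ctypes.c_float, num_samples * sample_size)
        self.cache = np.frombuffer(shared.get_obj(), dtype=np.float32)
        self.cache = self.cache.reshape(num_samples, sample_size)
        self.cached = mp.Array(ctypes.c_uint8, num_samples)   # 0/1 flags
        self.sample_size = sample_size

    def __len__(self):
        return self.cache.shape[0]

    def __getitem__(self, idx):
        if not self.cached[idx]:
            # the expensive generation runs once; the result lands in shared memory
            self.cache[idx] = np.random.randn(self.sample_size)
            self.cached[idx] = 1
        return torch.from_numpy(self.cache[idx].copy())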
Problem with my checkpoint file when using torch.load() | Hi, I have a problem loading my checkpoint file (.pth). It’s all right when I load my other checkpoint files, but not with this one. Here’s how I save the model:
def save_networks(self, epoch):
"""Save all the networks to the disk.
Parameters:
epoch (int) -- current epoch… | 1 | 2020-08-15T04:25:53.814Z | When you have to inference with a pytorch version below 1.6, try code below to convert your model, because pytorch changed the model saving format after version 1.6.
torch.save(model.state_dict(), path, _use_new_zipfile_serialization=False) | 5 | 2020-12-03T10:21:23.924Z | https://discuss.pytorch.org/t/problem-with-my-checkpoint-file-when-using-torch-load/92903/5 | When you have to inference with a pytorch version below 1.6, try code below to convert your model, because pytorch changed the model saving format after version 1.6.
torch.save(model.state_dict(), path, _use_new_zipfile_serialization=False) You are currently summing all correctly predicted pixels and divide it by the batch size. To get a valid accuracy between 0 and 100% you should divide correct_train by the number of pixels in your batch.
Try to calculate total_train as total_train += mask.nelement(). Hi <a class="mention" href="/u/shirui-japina">@shirui-japina</a>,
There is actually a guy called <a href="https://arxiv.org/search/cs?searchtype=author&query=Smith%2C+L+N" rel="nofollow noopener">Leslie N. Smith</a> who created this <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noopener">paper</a>.
Based on this paper, some other guy created the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noopener">learning rate finder</a>.
What, it does, it measures the loss for the different learning rates and plots the diagram as this one:
[image]
It shows up (empiricall… | 2,042 | {'text': ['When you have to inference with a pytorch version below 1.6, try code below to convert your model, because pytorch changed the model saving format after version 1.6.\n\ntorch.save(model.state_dict(), path, _use_new_zipfile_serialization=False)'], 'answer_start': [2042]} |
Calculate train accuracy of the model in segmentation task | I think I don’t have a good understanding of train accuracy. This is the snippet that trains the model and calculates the loss and train accuracy for a segmentation task.
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0
total_train = 0
co… | 1 | 2019-01-02T10:38:03.217Z | You are currently summing all correctly predicted pixels and divide it by the batch size. To get a valid accuracy between 0 and 100% you should divide correct_train by the number of pixels in your batch.
Try to calculate total_train as total_train += mask.nelement(). | 3 | 2019-01-02T11:51:07.363Z | https://discuss.pytorch.org/t/calculate-train-accuracy-of-the-model-in-segmentation-task/33581/2 | When you have to inference with a pytorch version below 1.6, try code below to convert your model, because pytorch changed the model saving format after version 1.6.
torch.save(model.state_dict(), path, _use_new_zipfile_serialization=False) You are currently summing all correctly predicted pixels and divide it by the batch size. To get a valid accuracy between 0 and 100% you should divide correct_train by the number of pixels in your batch.
Try to calculate total_train as total_train += mask.nelement(). Hi <a class="mention" href="/u/shirui-japina">@shirui-japina</a>,
There is actually a guy called <a href="https://arxiv.org/search/cs?searchtype=author&query=Smith%2C+L+N" rel="nofollow noopener">Leslie N. Smith</a> who created this <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noopener">paper</a>.
Based on this paper, some other guy created the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noopener">learning rate finder</a>.
What, it does, it measures the loss for the different learning rates and plots the diagram as this one:
[image]
It shows up (empiricall… | 1,263 | {'text': ['You are currently summing all correctly predicted pixels and divide it by the batch size. To get a valid accuracy between 0 and 100% you should divide correct_train by the number of pixels in your batch.\n\nTry to calculate total_train as total_train += mask.nelement().'], 'answer_start': [1263]} |
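The corrected accuracy computation in one runnable block, dividing by the pixel count from mask.nelement() rather than the batch size:

import torch

output = torch.randn(2, 5, 4, 4)           # [N, num_classes, H, W] logits
mask = torch.randint(0, 5, (2, 4, 4))      # [N, H, W] ground truth
pred = torch.argmax(output, dim=1)
correct = (pred == mask).sum().item()
total = mask.nelement()                    # pixels in the batch, here 32
print(100.0 * correct / total)             # accuracy in percent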
Get the best learning rate automatically | It is very difficult to find the best hyper-parameters when training a deep learning model. :cold_face::scream:
Is there a function in PyTorch to find the best learning rate? :thinking:
There is actually a guy called <a href="https://arxiv.org/search/cs?searchtype=author&query=Smith%2C+L+N" rel="nofollow noopener">Leslie N. Smith</a> who created this <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noopener">paper</a>.
Based on this paper, some other guy created the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noopener">learning rate finder</a>.
What, it does, it measures the loss for the different learning rates and plots the diagram as this one:
[image]
It shows up (empiricall… | 10 | 2019-10-15T17:35:31.985Z | https://discuss.pytorch.org/t/get-the-best-learning-rate-automatically/58269/4 | When you have to inference with a pytorch version below 1.6, try code below to convert your model, because pytorch changed the model saving format after version 1.6.
torch.save(model.state_dict(), path, _use_new_zipfile_serialization=False) You are currently summing all correctly predicted pixels and divide it by the batch size. To get a valid accuracy between 0 and 100% you should divide correct_train by the number of pixels in your batch.
Try to calculate total_train as total_train += mask.nelement(). Hi <a class="mention" href="/u/shirui-japina">@shirui-japina</a>,
There is actually a guy called <a href="https://arxiv.org/search/cs?searchtype=author&query=Smith%2C+L+N" rel="nofollow noopener">Leslie N. Smith</a> who created this <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noopener">paper</a>.
Based on this paper, some other guy created the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noopener">learning rate finder</a>.
What, it does, it measures the loss for the different learning rates and plots the diagram as this one:
[image]
It shows up (empiricall… | 511 | {'text': ['Hi <a class="mention" href="/u/shirui-japina">@shirui-japina</a>,\n\nThere is actually a guy called <a href="https://arxiv.org/search/cs?searchtype=author&query=Smith%2C+L+N" rel="nofollow noopener">Leslie N. Smith</a> who created this <a href="https://arxiv.org/abs/1506.01186" rel="nofollow noopener">paper</a>.\n\nBased on this paper, some other guy created the <a href="https://docs.fast.ai/callbacks.lr_finder.html" rel="nofollow noopener">learning rate finder</a>.\n\nWhat, it does, it measures the loss for the different learning rates and plots the diagram as this one:\n\n[image]\n\nIt shows up (empiricall…'], 'answer_start': [511]} |
Embedding Error Index out of Range in self | I tried Stack Overflow and other threads in the forum, but my issue still wasn’t resolved. I am a beginner; please help me understand what went wrong.
id_2_token = dict(enumerate(set(n for name in names for n in name),1))
token_2_id = {value:key for key,value in id_2_token.items()}
print(len(id_2_toke… | 1 | 2020-05-16T19:22:25.863Z | [image] csblacknet:
The highest value in that batch was 53 while my vocab(token_2_id) size is 56. What if another batch comes up with the highest value other than 53, what will happen then? How will I resolve that problem?
You cannot pass indices higher than embedding_dim-1, since the embeddi… | 5 | 2020-05-18T06:41:41.430Z | https://discuss.pytorch.org/t/embedding-error-index-out-of-range-in-self/81550/4 | [image] csblacknet:
The highest value in that batch was 53 while my vocab(token_2_id) size is 56. What if another batch comes up with the highest value other than 53, what will happen then? How will I resolve that problem?
You cannot pass indices higher than embedding_dim-1, since the embeddi… Both models work identically as seen here:
model1 = Net()
model2 = Net2()
model2.load_state_dict(model1.state_dict())
x = torch.randn(1, 3, 24, 24)
outputs1 = model1(x)
outputs2 = model2(x)
# Compare outputs
print((outputs1[0] == outputs2[0]).all())
print((outputs1[1] == outputs2[1]).all())
# C… This is the way I found works:
# generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples,… | 2,256 | {'text': ['[image] csblacknet:\n\nThe highest value in that batch was 53 while my vocab(token_2_id) size is 56. What if another batch comes up with the highest value other than 53, what will happen then? How will I resolve that problem?\n\nYou cannot pass indices higher than embedding_dim-1, since the embeddi…'], 'answer_start': [2256]} |
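The underlying rule, shown concretely: every index passed to nn.Embedding must lie in [0, num_embeddings - 1]. Since the vocabulary in the question is enumerated starting at 1, the largest id equals the vocab size, so the table needs vocab_size + 1 rows:

import torch
import torch.nn as nn

vocab_size = 56
emb = nn.Embedding(num_embeddings=vocab_size + 1, embedding_dim=32)
ok = torch.tensor([1, 5, 53, 56])     # ids 1..56 all fit now
print(emb(ok).shape)                  # torch.Size([4, 32])
# torch.tensor([57]) would raise 'index out of range in self'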
Confusion about using .clone | considering these two nets:
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, kernel_size=1, stride=1, bias=False)
self.conv2 = nn.Conv2d(6, 6, kernel_size=1, stride=1, bias=False)
self.conv3 = nn.Conv2d(… | 0 | 2019-03-12T20:20:15.413Z | Both models work identically as seen here:
model1 = Net()
model2 = Net2()
model2.load_state_dict(model1.state_dict())
x = torch.randn(1, 3, 24, 24)
outputs1 = model1(x)
outputs2 = model2(x)
# Compare outputs
print((outputs1[0] == outputs2[0]).all())
print((outputs1[1] == outputs2[1]).all())
# C… | 5 | 2019-03-12T23:43:24.837Z | https://discuss.pytorch.org/t/confusion-about-using-clone/39673/2 | [image] csblacknet:
The highest value in that batch was 53 while my vocab(token_2_id) size is 56. What if another batch comes up with the highest value other than 53, what will happen then? How will I resolve that problem?
You cannot pass indices higher than embedding_dim-1, since the embeddi… Both models work identically as seen here:
model1 = Net()
model2 = Net2()
model2.load_state_dict(model1.state_dict())
x = torch.randn(1, 3, 24, 24)
outputs1 = model1(x)
outputs2 = model2(x)
# Compare outputs
print((outputs1[0] == outputs2[0]).all())
print((outputs1[1] == outputs2[1]).all())
# C… This is the way I found works:
# generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples,… | 1,432 | {'text': ['Both models work identically as seen here:\n\nmodel1 = Net()\n\nmodel2 = Net2()\n\nmodel2.load_state_dict(model1.state_dict())\n\nx = torch.randn(1, 3, 24, 24)\n\noutputs1 = model1(x)\n\noutputs2 = model2(x)\n\n# Compare outputs\n\nprint((outputs1[0] == outputs2[0]).all())\n\nprint((outputs1[1] == outputs2[1]).all())\n\n# C…'], 'answer_start': [1432]} |
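What clone() buys on top of the identical forward results shown there: a copy with its own memory that still participates in autograd, so later in-place changes to the original do not leak into it:

import torch

x = torch.randn(3)
y = x             # same storage
z = x.clone()     # new storage
x.add_(1)         # in-place modification
print(y)          # changed along with x
print(z)          # unchanged copy
print(x.data_ptr() == y.data_ptr(), x.data_ptr() == z.data_ptr())   # True False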
Generating random tensors according to the uniform distribution pytorch? | I saw:
<a href="https://stackoverflow.com/users/4933403/bishwajit-purkaystha" target="_blank" rel="nofollow noopener">
[Bishwajit Purkaystha]
</a>
<a href="https://stackoverflow.com/questions/44328530/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch" target="_blank" rel="nofollow noopener">How to get a uniform distribution in a range [r1,r2] in PyTorch?</a>
pytorch, uniform-distribution
asked by
<a href="https://stackoverflow.com/users/4933403/bishwajit-purkaystha" target="_blank" rel="nofollow noopener">
Bishwajit Purkaystha
</a>
on <a href="https://stackoverflow.com/questions/44328530/how-to-get-a-uniform-distribution-in-a-range-r1-r2-in-pytorch" target="_blank" rel="nofollow noopener">12:05PM - 02 Jun 17 UTC</a>
and thought that was a strange way to do it.
… | 2 | 2019-08-09T20:45:19.984Z | This is the way I found works:
# generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples,… | 0 | 2020-07-15T16:32:51.745Z | https://discuss.pytorch.org/t/generating-random-tensors-according-to-the-uniform-distribution-pytorch/53030/8 | [image] csblacknet:
The highest value in that batch was 53 while my vocab(token_2_id) size is 56. What if another batch comes up with the highest value other than 53, what will happen then? How will I resolve that problem?
You cannot pass indices higher than embedding_dim-1, since the embeddi… Both models work identically as seen here:
model1 = Net()
model2 = Net2()
model2.load_state_dict(model1.state_dict())
x = torch.randn(1, 3, 24, 24)
outputs1 = model1(x)
outputs2 = model2(x)
# Compare outputs
print((outputs1[0] == outputs2[0]).all())
print((outputs1[1] == outputs2[1]).all())
# C… This is the way I found works:
# generating uniform variables
import numpy as np
num_samples = 3
Din = 1
lb, ub = -1, 1
xn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))
print(xn)
import torch
sampler = torch.distributions.Uniform(low=lb, high=ub)
r = sampler.sample((num_samples,… | 618 | {'text': ['This is the way I found works:\n\n# generating uniform variables\n\nimport numpy as np\n\nnum_samples = 3\n\nDin = 1\n\nlb, ub = -1, 1\n\nxn = np.random.uniform(low=lb, high=ub, size=(num_samples,Din))\n\nprint(xn)\n\nimport torch\n\nsampler = torch.distributions.Uniform(low=lb, high=ub)\n\nr = sampler.sample((num_samples,…'], 'answer_start': [618]} |
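The two common idioms side by side, both drawing U[lb, ub) samples of any shape:

import torch

num_samples, dim, lb, ub = 3, 2, -1.0, 1.0

x1 = (ub - lb) * torch.rand(num_samples, dim) + lb        # rescale U[0, 1)

sampler = torch.distributions.Uniform(low=lb, high=ub)    # distributions API
x2 = sampler.sample((num_samples, dim))

print(x1.shape, x2.shape)   # torch.Size([3, 2]) torch.Size([3, 2])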
Tf.extract_image_patches in pytorch | Is there a function like tf.extract_image_patches in pytorch?
Thank you | 0 | 2019-04-28T13:11:53.390Z | I’m not sure why the method is called extract_image_patches if you won’t get the patches, but apparently a view of [batch_size, height, width, channels*kernel_height*kernel_width].
However, this code should yield the same result in PyTorch:
import torch
import torch.nn.functional as F
batch_size … | 15 | 2019-04-28T21:33:37.984Z | https://discuss.pytorch.org/t/tf-extract-image-patches-in-pytorch/43837/8 | I’m not sure why the method is called extract_image_patches if you won’t get the patches, but apparently a view of [batch_size, height, width, channels*kernel_height*kernel_width].
However, this code should yield the same result in PyTorch:
import torch
import torch.nn.functional as F
batch_size … I install CUDA 9.2 using .sh script instead of CUDA 9.0 using .dep package. Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(… | 1,860 | {'text': ['I’m not sure why the method is called extract_image_patches if you won’t get the patches, but apparently a view of [batch_size, height, width, channels*kernel_height*kernel_width].\n\nHowever, this code should yield the same result in PyTorch:\n\nimport torch\n\nimport torch.nn.functional as F\n\nbatch_size …'], 'answer_start': [1860]} |
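The core of that equivalence is torch.nn.functional.unfold; a small shape walk-through with illustrative sizes:

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)                    # [N, C, H, W]
patches = F.unfold(x, kernel_size=3, stride=1)
print(patches.shape)                           # [1, 27, 36]: C*3*3 values, 6*6 positions
# rearrange into one flattened patch per spatial location, mirroring
# tf.extract_image_patches' [N, H', W', C*kh*kw] layout
patches = patches.transpose(1, 2).reshape(1, 6, 6, 27)
print(patches.shape)                           # torch.Size([1, 6, 6, 27])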
Cuda runtime error (11) | CUDA: 9.0
CUDNN: 7.4
GPU: 2080ti
Python: 3.6
Installed pytorch via “pip3 install torch torchvision”
I have THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=663 error=11 : invalid argument error while run
import torch
from torchvision.models import vgg16
model = vgg16().cuda(… | 2 | 2018-11-20T13:34:33.288Z | I install CUDA 9.2 using .sh script instead of CUDA 9.0 using .dep package. | 1 | 2018-12-10T07:48:52.567Z | https://discuss.pytorch.org/t/cuda-runtime-error-11/30080/6 | I’m not sure why the method is called extract_image_patches if you won’t get the patches, but apparently a view of [batch_size, height, width, channels*kernel_height*kernel_width].
However, this code should yield the same result in PyTorch:
import torch
import torch.nn.functional as F
batch_size … I install CUDA 9.2 using .sh script instead of CUDA 9.0 using .dep package. Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(… | 1,240 | {'text': ['I install CUDA 9.2 using .sh script instead of CUDA 9.0 using .dep package.'], 'answer_start': [1240]} |
RuntimeError: Tensor for 'out' is on CPU, Tensor for argument #1 'self' is on CPU, but expected them to be on GPU (while checking arguments for addmm) | Hi
I defined a ResNet as follows
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(DenseResidual, self).__init__(**kwargs)
self.h1 = torch.nn.Linear(inp_dim, neurons)
self.hidden = [torch.nn.Linear(neuro… | 2 | 2020-12-07T17:38:33.941Z | Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(… | 3 | 2020-12-09T15:03:18.754Z | https://discuss.pytorch.org/t/runtimeerror-tensor-for-out-is-on-cpu-tensor-for-argument-1-self-is-on-cpu-but-expected-them-to-be-on-gpu-while-checking-arguments-for-addmm/105453/7 | I’m not sure why the method is called extract_image_patches if you won’t get the patches, but apparently a view of [batch_size, height, width, channels*kernel_height*kernel_width].
However, this code should yield the same result in PyTorch:
import torch
import torch.nn.functional as F
batch_size … I install CUDA 9.2 using .sh script instead of CUDA 9.0 using .dep package. Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:
# Residual Block
class DenseResidual(torch.nn.Module):
def __init__(self, inp_dim, neurons, layers, **kwargs):
super(… | 386 | {'text': ['Hi, I found the solution to this problem. I forgot to use ModuleList in class defining Residual Block. When I added it, the code ran perfectly. Here’s the modified code:\n\n# Residual Block\n\nclass DenseResidual(torch.nn.Module):\n\ndef __init__(self, inp_dim, neurons, layers, **kwargs):\n\nsuper(…'], 'answer_start': [386]} |
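The essence of that fix in isolation: sub-modules kept in a plain Python list are invisible to .parameters() and .to(device), while nn.ModuleList registers them:

import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim, layers):
        super().__init__()
        self.hidden = nn.ModuleList(nn.Linear(dim, dim) for _ in range(layers))

    def forward(self, x):
        for layer in self.hidden:
            x = torch.tanh(layer(x))
        return x

model = Block(8, 3)
print(sum(p.numel() for p in model.parameters()))   # all 3 layers counted
model.to('cuda' if torch.cuda.is_available() else 'cpu')   # moves every layer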
AttributeError: 'numpy.ndarray' object has no attribute 'numpy' | <a class="mention" href="/u/ptrblck">@ptrblck</a>, Hi!
I’m trying to visualize the adversarial images generated by this script:
<a href="https://pytorch.org/tutorials/beginner/fgsm_tutorial.html" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/tutorials/beginner/fgsm_tutorial.html</a>
This tutorial is used for the mnist data. Now I want to use for other data which is trained using the inception_v1 architecture, below is the gist for t… | 0 | 2019-04-09T12:08:09.624Z | adv_ex is already a numpy array, so you can’t call .numpy() again on it (which is a tensor method).
Store adv_ex as a tensor or avoid calling numpy on it:
adv_ex = perturbed_data.squeeze().detach().cpu()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) ) | 1 | 2019-04-09T14:19:02.361Z | https://discuss.pytorch.org/t/attributeerror-numpy-ndarray-object-has-no-attribute-numpy/42062/2 | adv_ex is already a numpy array, so you can’t call .numpy() again on it (which is a tensor method).
Store adv_ex as a tensor or avoid calling numpy on it:
adv_ex = perturbed_data.squeeze().detach().cpu()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) ) You have most likely another PyTorch installation with CUDA10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again. I’m not sure if I understand it correctly, but I think this will do it:
a = torch.randn(5, 3)
b = torch.randn(5, 3)
res = a.unsqueeze(1) - b
# res[i] corresponds to (a[i] - b) | 1,370 | {'text': ['adv_ex is already a numpy array, so you can’t call .numpy() again on it (which is a tensor method).\n\nStore adv_ex as a tensor or avoid calling numpy on it:\n\nadv_ex = perturbed_data.squeeze().detach().cpu()\n\nadv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) )'], 'answer_start': [1370]} |
GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation | I installed PyTorch with
pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html
And then in a Python session I ran:
import torch
torch.tensor(1).cuda()
which then raised the warning in the title.
I know this supposedly works for peop… | 1 | 2021-06-07T13:16:22.602Z | You have most likely another PyTorch installation with CUDA10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again. | 1 | 2021-08-25T08:31:18.633Z | https://discuss.pytorch.org/t/geforce-rtx-3090-with-cuda-capability-sm-86-is-not-compatible-with-the-current-pytorch-installation/123499/10 | adv_ex is already a numpy array, so you can’t call .numpy() again on it (which is a tensor method).
Store adv_ex as a tensor or avoid calling numpy on it:
adv_ex = perturbed_data.squeeze().detach().cpu()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) ) You have most likely another PyTorch installation with CUDA10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again. I’m not sure if I understand it correctly, but I think this will do it:
a = torch.randn(5, 3)
b = torch.randn(5, 3)
res = a.unsqueeze(1) - b
# res[i] corresponds to (a[i] - b) | 961 | {'text': ['You have most likely another PyTorch installation with CUDA10.2 in your current environment, which conflicts with the new one.\n\nTry to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again.'], 'answer_start': [961]} |
How to calculate pair-wise differences between two tensors in a vectorized way? | I have two tensors of shape (4096, 3) and (4096,3). What I’d like to do is calculate the pairwise differences between all of the individual vectors in those matrices, such that I end up with a (4096, 4096, 3) tensor. This can be done in for-loops, but I’d like to do a vectorized approach. NumPy lets… | 1 | 2019-02-17T16:18:23.304Z | I’m not sure if I understand it correctly, but I think this will do it:
a = torch.randn(5, 3)
b = torch.randn(5, 3)
res = a.unsqueeze(1) - b
# res[i] corresponds to (a[i] - b) | 8 | 2019-02-18T01:11:38.095Z | https://discuss.pytorch.org/t/how-to-calculate-pair-wise-differences-between-two-tensors-in-a-vectorized-way/37451/2 | adv_ex is already a numpy array, so you can’t call .numpy() again on it (which is a tensor method).
Store adv_ex as a tensor or avoid calling numpy on it:
adv_ex = perturbed_data.squeeze().detach().cpu()
adv_examples.append( (init_pred.item(), final_pred.item(), adv_ex) ) You have most likely another PyTorch installation with CUDA10.2 in your current environment, which conflicts with the new one.
Try to either uninstall all source builds, pip wheels, and conda binaries in the current environment or create a new virtual environment and reinstall PyTorch again. I’m not sure if I understand it correctly, but I think this will do it:
a = torch.randn(5, 3)
b = torch.randn(5, 3)
res = a.unsqueeze(1) - b
# res[i] corresponds to (a[i] - b) | 570 | {'text': ['I’m not sure if I understand it correctly, but I think this will do it:\n\na = torch.randn(5, 3)\n\nb = torch.randn(5, 3)\n\nres = a.unsqueeze(1) - b\n\n# res[i] corresponds to (a[i] - b)'], 'answer_start': [570]} |
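The broadcasting at work, with the shapes spelled out (smaller sizes for readability):

import torch

a = torch.randn(5, 3)
b = torch.randn(5, 3)
res = a.unsqueeze(1) - b       # [5, 1, 3] - [5, 3] broadcasts to [5, 5, 3]
print(res.shape)               # torch.Size([5, 5, 3])
print(torch.allclose(res[2], a[2] - b))   # True: row i holds a[i] - b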
About bidirectional gru with seq2seq example and some modifications | Hi. I’m really new to pytorch. I was experimenting with code I found here:
<a href="http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html#sphx-glr-intermediate-seq2seq-translation-tutorial-py</a>
I’m trying to replace the EncoderRNN with a bidirectional version. Here’s my code.
class Enc… | 2 | 2018-03-27T22:05:01.210Z | If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest… | 1 | 2018-03-28T23:20:50.686Z | https://discuss.pytorch.org/t/about-bidirectional-gru-with-seq2seq-example-and-some-modifications/15588/5 | If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest… We need graph leaves to be able to compute gradients of final tensor w.r.t. them. Leaf nodes are not functions or simply put, have not been obtained from mathematical operations. For instance, in a nn.Linear(in, out) module, weight and bias are leaf nodes so when you call .backward on a loss functio… Hi,
This warning only means that you are accessing the .grad field of a Tensor for which pytorch will never populate the .grad field.
You can run your code with python -W error your_script.py to make python error out when the warning happens and so show you where it happens exactly.
The gist of t… | 1,498 | {'text': ['If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.\n\nThere are a few ways you can pass these to a decoder. The easiest…'], 'answer_start': [1498]} |
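One of the simple hand-off options the answer alludes to, sketched with illustrative sizes: sum the two directions of the encoder's final hidden state and feed it to a unidirectional decoder (concatenating plus a linear projection is a common alternative):

import torch
import torch.nn as nn

enc = nn.GRU(8, 16, bidirectional=True, batch_first=True)
dec = nn.GRU(8, 16, batch_first=True)

src = torch.randn(2, 5, 8)
_, h = enc(src)                      # h: [2 directions, batch, hidden]
h_dec = (h[0] + h[1]).unsqueeze(0)   # [1, batch, hidden] for the decoder
out, _ = dec(torch.randn(2, 4, 8), h_dec)
print(out.shape)                     # torch.Size([2, 4, 16])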
What is the purpose of `is_leaf`? | All Tensors that have <a href="https://pytorch.org/docs/master/autograd.html#torch.Tensor.requires_grad" rel="nofollow noopener"> requires_grad </a> which is False will be leaf Tensors by convention.
For Tensors that have <a href="https://pytorch.org/docs/master/autograd.html#torch.Tensor.requires_grad" rel="nofollow noopener"> requires_grad </a> which is True , they will be leaf Tensors if they were created by the user. This means that they are not the result of an operation and so grad_fn is None.
Only … | 1 | 2020-06-26T08:03:10.276Z | We need graph leaves to be able to compute gradients of final tensor w.r.t. them. Leaf nodes are not functions or simply put, have not been obtained from mathematical operations. For instance, in a nn.Linear(in, out) module, weight and bias are leaf nodes so when you call .backward on a loss functio… | 3 | 2020-06-26T11:08:26.817Z | https://discuss.pytorch.org/t/what-is-the-purpose-of-is-leaf/87000/4 | If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest… We need graph leaves to be able to compute gradients of final tensor w.r.t. them. Leaf nodes are not functions or simply put, have not been obtained from mathematical operations. For instance, in a nn.Linear(in, out) module, weight and bias are leaf nodes so when you call .backward on a loss functio… Hi,
This warning only means that you are accessing the .grad field of a Tensor for which pytorch will never populate the .grad field.
You can run your code with python -W error your_script.py to make python error out when the warning happens and so show you where it happens exactly.
The gist of t… | 1,058 | {'text': ['We need graph leaves to be able to compute gradients of final tensor w.r.t. them. Leaf nodes are not functions or simply put, have not been obtained from mathematical operations. For instance, in a nn.Linear(in, out) module, weight and bias are leaf nodes so when you call .backward on a loss functio…'], 'answer_start': [1058]} |
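A short demonstration of the convention described above, where user-created tensors are leaves while results of operations are not:

import torch

x = torch.randn(3, requires_grad=True)    # created by the user -> leaf
y = x * 2                                 # result of an operation -> not a leaf
z = torch.randn(3)                        # requires_grad=False -> leaf by convention

print(x.is_leaf, y.is_leaf, z.is_leaf)    # True False True
y.sum().backward()
print(x.grad)                             # populated; y.grad stays None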
.grad attribute of a non-leaf tensor being accessed | Hi there, I’m a newbie at PyTorch.
I am running into the warning: “UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won’t be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on… | 0 | 2020-05-21T12:42:07.288Z | Hi,
This warning only means that you are accessing the .grad field of a Tensor for which pytorch will never populate the .grad field.
You can run your code with python -W error your_script.py to make python error out when the warning happens and so show you where it happens exactly.
The gist of t… | 14 | 2020-05-21T14:55:49.104Z | https://discuss.pytorch.org/t/grad-attribute-of-a-non-leaf-tensor-being-accessed/82313/2 | If you’re going to pass an encoder_hidden to your decoder you don’t even need the initHidden method. Your gru will automatically set the initial hidden state to zero, process the whole sequence and pop out an output and hidden_state.
There are a few ways you can pass these to a decoder. The easiest… We need graph leaves to be able to compute gradients of final tensor w.r.t. them. Leaf nodes are not functions or simply put, have not been obtained from mathematical operations. For instance, in a nn.Linear(in, out) module, weight and bias are leaf nodes so when you call .backward on a loss functio… Hi,
This warning only means that you are accessing the .grad field of a Tensor for which pytorch will never populate the .grad field.
You can run your code with python -W error your_script.py to make python error out when the warning happens and so show you where it happens exactly.
The gist of t… | 618 | {'text': ['Hi,\n\nThis warning only means that you are accessing the .grad field of a Tensor for which pytorch will never populate the .grad field.\n\nYou can run your code with python -W error your_script.py to make python error out when the warning happens and so show you where it happens exactly.\n\nThe gist of t…'], 'answer_start': [618]} |
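A minimal sketch of the retain_grad() route the warning suggests:

import torch

x = torch.randn(3, requires_grad=True)    # leaf
y = x * 2                                 # non-leaf
y.retain_grad()                           # ask autograd to keep y's gradient
y.sum().backward()
print(x.grad)                             # populated automatically (leaf)
print(y.grad)                             # populated only because of retain_grad()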
Error on torch.load() (PytorchStreamReader failed) | Hi,
I was trying to load the PyTorch model but am facing an unexpected error. I did not disturb the folder structure and am still getting this error.
>>> torch.load("outputs/test_validation_loss_logging/model_001650.pth", map_location="cpu")
Traceback (most recent call last):
File "<stdin>", line 1, in… | 0 | 2020-09-03T19:58:43.941Z | Ok, I’m able to load the model. The problem was with the saved weight file. It wasn’t saved properly and the weight file size was smaller (only 90 MB instead of 200 MB). | 1 | 2020-09-09T14:13:28.578Z | https://discuss.pytorch.org/t/error-on-torch-load-pytorchstreamreader-failed/95103/4 | Ok, I’m able to load the model. The problem was with the saved weight file. It wasn’t saved properly and the weight file size was smaller (only 90 MB instead of 200 MB). I think I found the reason show it works despite the wrong flattening. You’re last two lines in your forward() method are:
def forward(self, x, hs):
...
out = self.fc(out)
return out[-1], hs
The out[-1] resolves the “artificial” batch of 12 values so you have only 1 output value. If yo… If you concatenate the images, you’ll get “less” samples, so I’m not sure how you would like to keep the batch size as 6.
Could you explain your use case a bit?
Since you are now dealing with multi-hot encoded targets (i.e. multi-label classification), you could use nn.BCELoss or nn.BCEWithLogitsL… | 1,852 | {'text': ['Ok, I’m able to load the model. The problem was with the saved weight file. It wasn’t saved properly and the weight file size was smaller (only 90 MB instead of 200 MB).'], 'answer_start': [1852]} |
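Since the root cause above was a truncated checkpoint, a cheap safeguard is to reload a checkpoint immediately after saving it. A sketch with a stand-in model; the filename echoes the question and is otherwise arbitrary:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                    # stand-in for the real model
path = "model_001650.pth"
torch.save(model.state_dict(), path)

# An interrupted or out-of-disk write fails here, not days later
state = torch.load(path, map_location="cpu")
model.load_state_dict(state)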
Please help: LSTM input/output dimensions | I am hopelessly lost trying to understand the shape of data coming in and out of an LSTM.
Most attempts to explain the data flow involve using randomly generated data with no real meaning, which is incredibly unhelpful.
Those examples that use real data, like this <a href="https://github.com/udacity/deep-learning-v2-pytorch/blob/master/recurrent-neural-networks/char-rnn/Character_Level_RNN_Solution.ipynb" rel="nofollow noopener">Udacity notebook</a> on the topic do … | 0 | 2020-07-15T17:10:40.701Z | I think I found the reason why it works despite the wrong flattening. Your last two lines in your forward() method are:
def forward(self, x, hs):
...
out = self.fc(out)
return out[-1], hs
The out[-1] resolves the “artificial” batch of 12 values so you have only 1 output value. If yo… | 2 | 2020-07-17T01:54:52.095Z | https://discuss.pytorch.org/t/please-help-lstm-input-output-dimensions/89353/8 | Ok, I’m able to load the model. The problem was with the saved weight file. It wasn’t saved properly and the weight file size was smaller (only 90 MB instead of 200 MB). I think I found the reason why it works despite the wrong flattening. Your last two lines in your forward() method are:
def forward(self, x, hs):
...
out = self.fc(out)
return out[-1], hs
The out[-1] resolves the “artificial” batch of 12 values so you have only 1 output value. If yo… If you concatenate the images, you’ll get “less” samples, so I’m not sure how you would like to keep the batch size as 6.
Could you explain your use case a bit?
Since you are now dealing with multi-hot encoded targets (i.e. multi-label classification), you could use nn.BCELoss or nn.BCEWithLogitsL… | 1,096 | {'text': ['I think I found the reason why it works despite the wrong flattening. Your last two lines in your forward() method are:\n\ndef forward(self, x, hs):\n\n...\n\nout = self.fc(out)\n\nreturn out[-1], hs\n\nThe out[-1] resolves the “artificial” batch of 12 values so you have only 1 output value. If yo…'], 'answer_start': [1096]}
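A compact shape walkthrough matching the answer above; the sizes are illustrative:

import torch
import torch.nn as nn

seq_len, batch, input_size, hidden_size = 12, 1, 8, 32
lstm = nn.LSTM(input_size, hidden_size)    # sequence-first by default
fc = nn.Linear(hidden_size, 1)

x = torch.randn(seq_len, batch, input_size)
out, (h, c) = lstm(x)                      # out: [seq_len, batch, hidden_size]
last = out[-1]                             # [batch, hidden_size], the final time step
pred = fc(last)                            # [batch, 1]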
Concatenating images | I want to write code in PyTorch that concatenates two images (32 by 32) so that the output image becomes (64 by 32). How should I do that?
Thank you~ | 1 | 2019-03-26T22:48:50.017Z | If you concatenate the images, you’ll get “less” samples, so I’m not sure how you would like to keep the batch size as 6.
Could you explain your use case a bit?
Since you are now dealing with multi-hot encoded targets (i.e. multi-label classification), you could use nn.BCELoss or nn.BCEWithLogitsL… | 0 | 2019-03-27T12:25:51.507Z | https://discuss.pytorch.org/t/concatenating-images/40961/15 | Ok, I’m able to load the model. The problem was with the saved weight file. It wasn’t saved properly and the weight file size was smaller (only 90 MB instead of 200 MB). I think I found the reason why it works despite the wrong flattening. Your last two lines in your forward() method are:
def forward(self, x, hs):
...
out = self.fc(out)
return out[-1], hs
The out[-1] resolves the “artificial” batch of 12 values so you have only 1 output value. If yo… If you concatenate the images, you’ll get “less” samples, so I’m not sure how you would like to keep the batch size as 6.
Could you explain your use case a bit?
Since you are now dealing with multi-hot encoded targets (i.e. multi-label classification), you could use nn.BCELoss or nn.BCEWithLogitsL… | 470 | {'text': ['If you concatenate the images, you’ll get “less” samples, so I’m not sure how you would like to keep the batch size as 6.\n\nCould you explain your use case a bit?\n\nSince you are now dealing with multi-hot encoded targets (i.e. multi-label classification), you could use nn.BCELoss or nn.BCEWithLogitsL…'], 'answer_start': [470]} |
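For the (32 by 32) to (64 by 32) case in the question, torch.cat along the height dimension does it; a sketch with CHW tensors:

import torch

img1 = torch.randn(3, 32, 32)              # [channels, height, width]
img2 = torch.randn(3, 32, 32)
stacked = torch.cat((img1, img2), dim=1)   # concatenate along height
assert stacked.shape == (3, 64, 32)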
About Synchronize Batch Norm across Multi-GPU Implementation | I want to implement synchronized batch norm across multiple GPUs. How can I do it? I think I should synchronize its mean and variance in both the forward and backward pass, so can I use register_hook? Can someone give me some advice? Thank you. | 2 | 2017-07-23T13:40:49.864Z | A brief description of implementing synchronized BN:
<a href="http://hangzh.com/SynchronizeBN/" target="_blank" rel="nofollow noopener">Implementing Synchronized Multi-GPU Batch Normalization, Do It Exactly Right</a>
Hang Zhang, Rutgers University, Computer Vision – :white_check_mark: Please check out the new post. | 5 | 2017-08-09T21:32:02.158Z | https://discuss.pytorch.org/t/about-synchronize-batch-norm-across-multi-gpu-implementation/5129/5 | A brief description of implementing synchronized BN:
<a href="http://hangzh.com/SynchronizeBN/" target="_blank" rel="nofollow noopener">Implementing Synchronized Multi-GPU Batch Normalization, Do It Exactly Right</a>
Hang Zhang, Rutgers University, Computer Vision – :white_check_mark: Please check out the new post. Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could up-/resample your images to the needed size and try it again. Setting the environment variable LD_PRELOAD to load jemalloc instead of the default CPU allocator solved the problem.
My launch is as follows:
LD_PRELOAD=./libjemalloc.so.1 python3 app.py.
Related links:
<a href="https://discuss.pytorch.org/t/pytorch-cpu-memory-usage/94380/5">Related problem</a>
<a href="https://zapier.com/engineering/celery-python-jemalloc/" rel="noopener nofollow ugc">Decreasing RAM Usage by 40% Using jemalloc with Python & Celery</a> | 1,556 | {'text': ['A brief description of implementing synchronize BN:\n\n<a href="http://hangzh.com/SynchronizeBN/" target="_blank" rel="nofollow noopener">Implementing Synchronized Multi-GPU Batch Normalization, Do It Exactly Right</a>\n\nHang Zhang, Rutgers University, Computer Vision – :white_check_mark: Please checkout the new post.'], 'answer_start': [1556]} |
Error in training inception-v3 | I have a model that was written using models from torchvision and I want to test the performance with inception-v3. However, with the same model structure and input images (size 224 x 224), I got the following error.
RuntimeError: Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5… | 1 | 2018-08-26T22:44:11.060Z | Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could up-/resample your images to the needed size and try it again. | 6 | 2018-08-26T22:46:04.082Z | https://discuss.pytorch.org/t/error-in-training-inception-v3/23933/2 | A brief description of implementing synchronized BN:
<a href="http://hangzh.com/SynchronizeBN/" target="_blank" rel="nofollow noopener">Implementing Synchronized Multi-GPU Batch Normalization, Do It Exactly Right</a>
Hang Zhang, Rutgers University, Computer Vision – :white_check_mark: Please check out the new post. Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could up-/resample your images to the needed size and try it again. Setting the environment variable LD_PRELOAD to load jemalloc instead of the default CPU allocator solved the problem.
My launch is as follows:
LD_PRELOAD=./libjemalloc.so.1 python3 app.py.
Related links:
<a href="https://discuss.pytorch.org/t/pytorch-cpu-memory-usage/94380/5">Related problem</a>
<a href="https://zapier.com/engineering/celery-python-jemalloc/" rel="noopener nofollow ugc">Decreasing RAM Usage by 40% Using jemalloc with Python & Celery</a> | 1,095 | {'text': ['Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].\n\nYou could up-/resample your images to the needed size and try it again.'], 'answer_start': [1095]} |
Memory leaks at inference | I’m trying to run my model with Flask, but I bumped into high memory consumption and, eventually, the server shutting down.
I started to profile my app to find the place with huge memory allocation and found it in model inference (if I comment out my network inference, then there are no problems with memory). … | 3 | 2020-06-11T21:10:35.137Z | Setting the environment variable LD_PRELOAD to load jemalloc instead of the default CPU allocator solved the problem.
My launch is as follows:
LD_PRELOAD=./libjemalloc.so.1 python3 app.py.
Related links:
<a href="https://discuss.pytorch.org/t/pytorch-cpu-memory-usage/94380/5">Related problem</a>
<a href="https://zapier.com/engineering/celery-python-jemalloc/" rel="noopener nofollow ugc">Decreasing RAM Usage by 40% Using jemalloc with Python & Celery</a> | 3 | 2020-09-30T22:42:17.967Z | https://discuss.pytorch.org/t/memory-leaks-at-inference/85108/14 | A brief description of implementing synchronize BN:
<a href="http://hangzh.com/SynchronizeBN/" target="_blank" rel="nofollow noopener">Implementing Synchronized Multi-GPU Batch Normalization, Do It Exactly Right</a>
Hang Zhang, Rutgers University, Computer Vision – :white_check_mark: Please check out the new post. Inception-v3 needs an input shape of [batch_size, 3, 299, 299] instead of [..., 224, 224].
You could up-/resample your images to the needed size and try it again. Setting the environment variable LD_PRELOAD to load jemalloc instead of the default CPU allocator solved the problem.
My launch is as follows:
LD_PRELOAD=./libjemalloc.so.1 python3 app.py.
Related links:
<a href="https://discuss.pytorch.org/t/pytorch-cpu-memory-usage/94380/5">Related problem</a>
<a href="https://zapier.com/engineering/celery-python-jemalloc/" rel="noopener nofollow ugc">Decreasing RAM Usage by 40% Using jemalloc with Python & Celery</a> | 481 | {'text': ['Setting environment variable LD_PRELOAD with the aim of loading jemalloc instead of default CPU allocator solved the problem.\n\nMy launch is as follows:\n\nLD_PRELOAD=./libjemalloc.so.1 python3 app.py.\n\nRelated links:\n\n<a href="https://discuss.pytorch.org/t/pytorch-cpu-memory-usage/94380/5">Related problem</a>\n\n<a href="https://zapier.com/engineering/celery-python-jemalloc/" rel="noopener nofollow ugc">Decreasing RAM Usage by 40% Using jemalloc with Python & Celery</a>'], 'answer_start': [481]} |
RuntimeError: shape '[-1, 400]' is invalid for input of size | In order to classify images with pytorch, I modified my local data while using ImageFolder based on the following URL
<a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html</a>
However, “RuntimeError: shape ‘[-1, 400]’ is invalid for input of size” is displayed and I do not know the cause.
… | 0 | 2018-12-29T13:16:49.075Z | It seems that the offset and v_length calculation combined with the slicing of theta is wrong.
As you can see here:
torch.randn([10])[10:20].view(48)
> RuntimeError: shape '[48]' is invalid for input of size 0
you are most likely creating an empty tensor in theta[offset: offset+v_length] while th… | 1 | 2021-10-24T21:53:25.314Z | https://discuss.pytorch.org/t/runtimeerror-shape-1-400-is-invalid-for-input-of-size/33354/12 | It seems that the offset and v_length calculation combined with the slicing of theta is wrong.
As you can see here:
torch.randn([10])[10:20].view(48)
> RuntimeError: shape '[48]' is invalid for input of size 0
you are most likely creating an empty tensor in theta[offset: offset+v_length] while th… <a class="mention" href="/u/mariosoreo">@MariosOreo</a> Thanks for the catch! I’ve fixed it in my post. :wink:
<a class="mention" href="/u/deb_prakash_chatterj">@Deb_Prakash_Chatterj</a> You could count it manually of create a confusion matrix first.
Based on the confusion matrix you could then calculate the stats.
Here is a small example. I tried to validate the results, but you should defin… You have to call it on your model:
model.load_state_dict(torch.load(...)) | 1,910 | {'text': ['It seems that the offset and v_length calculation combined with the slicing of theta is wrong.\n\nAs you can see here:\n\ntorch.randn([10])[10:20].view(48)\n\n> RuntimeError: shape '[48]' is invalid for input of size 0\n\nyou are most likely creating an empty tensor in theta[offset: offset+v_length] while th…'], 'answer_start': [1910]}
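The usual way to avoid this family of view() errors is to keep the batch dimension and let PyTorch infer the rest, so the flattened size always matches the actual activation shape; a sketch:

import torch

x = torch.randn(8, 16, 5, 5)               # e.g. a conv output: [batch, C, H, W]
flat = x.view(x.size(0), -1)               # [8, 400]; never hard-code the batch size
print(flat.shape)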
How to get the sensitivity and specificity of a dataset? | Guys, I am making a classifier using ResNet and I want to get the Sensitivity and specificity of the particular dataset. Right now I have accuracy, Train loss, and test loss. I have already studied from Wikipedia and YouTube, about True positive/negative, false negative/positive and know the formula… | 1 | 2019-03-09T19:57:03.548Z | <a class="mention" href="/u/mariosoreo">@MariosOreo</a> Thanks for the catch! I’ve fixed it in my post. :wink:
<a class="mention" href="/u/deb_prakash_chatterj">@Deb_Prakash_Chatterj</a> You could count it manually of create a confusion matrix first.
Based on the confusion matrix you could then calculate the stats.
Here is a small example. I tried to validate the results, but you should defin… | 6 | 2019-03-10T14:09:32.210Z | https://discuss.pytorch.org/t/how-to-get-the-sensitivity-and-specificity-of-a-dataset/39373/6 | It seems that the offset and v_length calculation combined with the slicing of theta is wrong.
As you can see here:
torch.randn([10])[10:20].view(48)
> RuntimeError: shape '[48]' is invalid for input of size 0
you are most likely creating an empty tensor in theta[offset: offset+v_length] while th… <a class="mention" href="/u/mariosoreo">@MariosOreo</a> Thanks for the catch! I’ve fixed it in my post. :wink:
<a class="mention" href="/u/deb_prakash_chatterj">@Deb_Prakash_Chatterj</a> You could count it manually of create a confusion matrix first.
Based on the confusion matrix you could then calculate the stats.
Here is a small example. I tried to validate the results, but you should defin… You have to call it on your model:
model.load_state_dict(torch.load(...)) | 1,276 | {'text': ['<a class="mention" href="/u/mariosoreo">@MariosOreo</a> Thanks for the catch! I’ve fixed it in my post. :wink:\n\n<a class="mention" href="/u/deb_prakash_chatterj">@Deb_Prakash_Chatterj</a> You could count it manually or create a confusion matrix first.\n\nBased on the confusion matrix you could then calculate the stats.\n\nHere is a small example. I tried to validate the results, but you should defin…'], 'answer_start': [1276]}
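A self-contained sketch of the confusion-matrix route for the binary case; the prediction and target values are made up:

import torch

preds  = torch.tensor([1, 0, 1, 1, 0, 0, 1, 0])
target = torch.tensor([1, 0, 0, 1, 0, 1, 1, 0])

tp = ((preds == 1) & (target == 1)).sum().float()   # true positives
tn = ((preds == 0) & (target == 0)).sum().float()   # true negatives
fp = ((preds == 1) & (target == 0)).sum().float()   # false positives
fn = ((preds == 0) & (target == 1)).sum().float()   # false negatives

sensitivity = tp / (tp + fn)               # true positive rate (recall)
specificity = tn / (tn + fp)               # true negative rate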
Torch has not attribute load_state_dict? | Hi. I am trying to load a model with:
import torch
import pyautogui as mouse
import cv2
from ScreenRecorder import Record,IniRecord,Frame
def start(model):
sc_ini = Frame()
monitor = sc_ini.get()
sc = IniRecord(monitor,1.6)
while True:
frame = sc.getFrame()
cv2… | 1 | 2018-07-26T19:29:58.920Z | You have to call it on your model:
model.load_state_dict(torch.load(...)) | 0 | 2018-07-26T19:31:49.003Z | https://discuss.pytorch.org/t/torch-has-not-attribute-load-state-dict/21781/2 | It seems that the offset and v_length calculation combined with the slicing of theta is wrong.
As you can see here:
torch.randn([10])[10:20].view(48)
> RuntimeError: shape '[48]' is invalid for input of size 0
you are most likely creating an empty tensor in theta[offset: offset+v_length] while th… <a class="mention" href="/u/mariosoreo">@MariosOreo</a> Thanks for the catch! I’ve fixed it in my post. :wink:
<a class="mention" href="/u/deb_prakash_chatterj">@Deb_Prakash_Chatterj</a> You could count it manually of create a confusion matrix first.
Based on the confusion matrix you could then calculate the stats.
Here is a small example. I tried to validate the results, but you should defin… You have to call it on your model:
model.load_state_dict(torch.load(...)) | 728 | {'text': ['You have to call it on your model:\n\nmodel.load_state_dict(torch.load(...))'], 'answer_start': [728]} |
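Spelled out with a stand-in model and a hypothetical checkpoint path; the method lives on the module instance, not on the torch namespace:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)                    # an instance of your model class
state = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(state)               # called on the model, not on torch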
Performing mini-batch gradient descent or stochastic gradient descent on a mini-batch | Hello, I have created a data-loader object, set the parameter batch size equal to five, and run the following code. I would like some clarification: is the following code performing mini-batch gradient descent or stochastic gradient descent on a mini-batch?
from torch import nn
import torch
impo… | 1 | 2018-07-16T19:01:46.787Z | In your current code snippet you are assigning x to your complete dataset, i.e. you are performing batch gradient descent.
In the former code your DataLoader provided batches of size 5, so you used mini-batch gradient descent.
If you use a dataloader with batch_size=1 or slice each sample one by o… | 1 | 2018-07-17T06:43:53.259Z | https://discuss.pytorch.org/t/performing-mini-batch-gradient-descent-or-stochastic-gradient-descent-on-a-mini-batch/21235/4 | In your current code snippet you are assigning x to your complete dataset, i.e. you are performing batch gradient descent.
In the former code your DataLoader provided batches of size 5, so you used mini-batch gradient descent.
If you use a dataloader with batch_size=1 or slice each sample one by o… When it says the input sizes for your network must be the same, it means that the images that you input to your model, say ResNet, should be of the same size at every iteration for maximum performance.
When you enable cudnn benchmark, what it does is, before beginning the training of your model it optimizes … No, the manual seed is not the issue. I’ve just used it in my first example to show that the optimizer does not have any problems optimizing a model with unused parameters.
Even if we copy all parameters between models, the optimizer works identically.
So back to your original question. The discr… | 1,604 | {'text': ['In your current code snippet you are assigning x to your complete dataset, i.e. you are performing batch gradient descent.\n\nIn the former code your DataLoader provided batches of size 5, so you used mini-batch gradient descent.\n\nIf you use a dataloader with batch_size=1 or slice each sample one by o…'], 'answer_start': [1604]} |
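A minimal end-to-end sketch of mini-batch gradient descent as described above, one optimizer step per batch of 5; the data is synthetic:

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

x = torch.randn(100, 1)
y = 3 * x + 0.1 * torch.randn(100, 1)
loader = DataLoader(TensorDataset(x, y), batch_size=5, shuffle=True)

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

for xb, yb in loader:                      # each iteration sees a mini-batch of 5
    optimizer.zero_grad()
    loss = criterion(model(xb), yb)
    loss.backward()
    optimizer.step()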
Can you use torch.backends.cudnn.benchmark = True after resizing images? | The thread at <a href="https://discuss.pytorch.org/t/what-does-torch-backends-cudnn-benchmark-do/5936/2" class="inline-onebox">What does torch.backends.cudnn.benchmark do?</a> says that you can set torch.backends.cudnn.benchmark = True if your input sizes for your network don’t vary.
So is this fine to enable if I resize my images to be the same size in the dataloader at every iteration, or is this considered hav… | 1 | 2019-03-22T19:48:39.971Z | When it says the input sizes for your network must be the same, it means that the images that you input to your model, say ResNet, should be of the same size at every iteration for maximum performance.
When you enable cudnn benchmark, what it does is, before beginning the training of your model it optimizes … | 8 | 2019-03-22T20:22:49.413Z | https://discuss.pytorch.org/t/can-you-use-torch-backends-cudnn-benchmark-true-after-resizing-images/40659/2 | In your current code snippet you are assigning x to your complete dataset, i.e. you are performing batch gradient descent.
In the former code your DataLoader provided batches of size 5, so you used mini-batch gradient descent.
If you use a dataloader with batch_size=1 or slice each sample one by o… When it says the input sizes for your network must be the same, it means that the images that you input to your model, say ResNet, should be of the same size at every iteration for maximum performance.
When you enable cudnn benchmark, what it does is, before beginning the training of your model it optimizes … No, the manual seed is not the issue. I’ve just used it in my first example to show that the optimizer does not have any problems optimizing a model with unused parameters.
Even if we copy all parameters between models, the optimizer works identically.
So back to your original question. The discr… | 1,111 | {'text': ['When it says the input sizes for your network must be the same, it means that the images that you input to your model, say ResNet, should be of the same size at every iteration for maximum performance.\n\nWhen you enable cudnn benchmark, what it does is, before beginning the training of your model it optimizes …'], 'answer_start': [1111]}
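In practice that means resizing inside the transform pipeline so every batch has one shape, and then turning the flag on; a sketch with an illustrative target size:

import torch
import torchvision.transforms as T

torch.backends.cudnn.benchmark = True      # safe when input shapes are fixed

transform = T.Compose([
    T.Resize((224, 224)),                  # guarantees one shape per batch
    T.ToTensor(),
])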