Dataset columns (name, question, answer, answerURL, context, and answers are strings; questionUpvotes, answerUpvotes, and answer_start are int64; timeCreated and timeAnswered are ISO-8601 timestamps):

name | question | questionUpvotes | timeCreated | answer | answerUpvotes | timeAnswered | answerURL | context | answer_start | answers
---|---|---|---|---|---|---|---|---|---|---
Custom loss functions | Hi,
I’m implementing a custom loss function in PyTorch 0.4. Reading the docs and the forums, it seems that there are two ways to define a custom loss function:
Extending Function and implementing forward and backward methods.
Extending Module and implementing only the forward method.
With that i… | 8 | 2018-11-12T13:44:03.348Z | Sure, as long as you use PyTorch operations, you should be fine.
Here is a dummy implementation of nn.MSELoss using the mean:
def my_loss(output, target):
loss = torch.mean((output - target)**2)
return loss
model = nn.Linear(2, 2)
x = torch.randn(1, 2)
target = torch.randn(1, 2)
output = … | 54 | 2018-11-12T13:51:02.656Z | https://discuss.pytorch.org/t/custom-loss-functions/29387/2 | Sure, as long as you use PyTorch operations, you should be fine.
Here is a dummy implementation of nn.MSELoss using the mean:
def my_loss(output, target):
loss = torch.mean((output - target)**2)
return loss
model = nn.Linear(2, 2)
x = torch.randn(1, 2)
target = torch.randn(1, 2)
output = … You can just use a plot library like matplotlib to visualize the output.
Sure! You could use some loss function like nn.BCELoss as your criterion to reconstruct the images.
Forward hooks are a good choice to get the activation map for a certain input.
Here is a small code example as a sta… log_softmax applies logarithm after softmax.
softmax:
exp(x_i) / exp(x).sum()
log_softmax:
log( exp(x_i) / exp(x).sum() )
log_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while doing the same operation. You might want to have a look at… | 0 | {'text': ['Sure, as long as you use PyTorch operations, you should be fine.\n\nHere is a dummy implementation of nn.MSELoss using the mean:\n\ndef my_loss(output, target):\n\nloss = torch.mean((output - target)**2)\n\nreturn loss\n\nmodel = nn.Linear(2, 2)\n\nx = torch.randn(1, 2)\n\ntarget = torch.randn(1, 2)\n\noutput = …'], 'answer_start': [0]} |
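The custom-loss snippet above is truncated at its last line. For reference, a minimal runnable sketch of the same pattern (the lines after output = are assumptions: a forward pass, the loss call, and backward):

import torch
import torch.nn as nn

def my_loss(output, target):
    # Mean squared error built only from differentiable PyTorch ops,
    # so autograd handles the backward pass automatically.
    loss = torch.mean((output - target) ** 2)
    return loss

model = nn.Linear(2, 2)
x = torch.randn(1, 2)
target = torch.randn(1, 2)
output = model(x)
loss = my_loss(output, target)
loss.backward()  # populates model.weight.grad and model.bias.grad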
Visualize feature map | Hi, all.
I have some questions about the visualization.
I'm a newbie in this field… so maybe these are silly questions.
I have the MNIST dataset, and I want to visualize the output of my encoder.
(Input: MNIST data) -> MY_ENCODER -> output -> visualization.
How can I visualize the data from output of … | 9 | 2018-11-14T16:33:18.726Z | You can just use a plot library like matplotlib to visualize the output.
Sure! You could use some loss function like nn.BCELoss as your criterion to reconstruct the images.
Forward hooks are a good choice to get the activation map for a certain input.
Here is a small code example as a sta… | 43 | 2018-11-14T20:20:59.735Z | https://discuss.pytorch.org/t/visualize-feature-map/29597/2 | Sure, as long as you use PyTorch operations, you should be fine.
Here is a dummy implementation of nn.MSELoss using the mean:
def my_loss(output, target):
loss = torch.mean((output - target)**2)
return loss
model = nn.Linear(2, 2)
x = torch.randn(1, 2)
target = torch.randn(1, 2)
output = … You can just use a plot library like matplotlib to visualize the output.
Sure! You could use some loss function like nn.BCELoss as your criterion to reconstruct the images.
Forward hooks are a good choice to get the activation map for a certain input.
Here is a small code example as a sta… log_softmax applies logarithm after softmax.
softmax:
exp(x_i) / exp(x).sum()
log_softmax:
log( exp(x_i) / exp(x).sum() )
log_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while doing the same operation. You might want to have a look at… | 306 | {'text': ['You can just use a plot library like matplotlib to visualize the output.\n\nSure! You could use some loss function like nn.BCELoss as your criterion to reconstruct the images.\n\nForward hooks are a good choice to get the activation map for a certain input.\n\nHere is a small code example as a sta…'], 'answer_start': [306]} |
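The forward-hook example in the answer above is cut off at "Here is a small code example as a sta…". A hedged sketch of that idea (the encoder architecture and the layer name are illustrative, not from the thread):

import torch
import torch.nn as nn
import matplotlib.pyplot as plt

activations = {}

def save_activation(name):
    def hook(module, inp, out):
        activations[name] = out.detach()
    return hook

encoder = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU())
encoder[0].register_forward_hook(save_activation("conv"))
encoder(torch.randn(1, 1, 28, 28))            # one MNIST-sized input

plt.imshow(activations["conv"][0, 0].numpy())  # visualize the first channel
plt.show()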
What is the difference between log_softmax and softmax? | What is the difference between log_softmax and softmax?
How to explain them in mathematics?
Thank you! | 5 | 2018-01-03T09:25:12.017Z | log_softmax applies logarithm after softmax.
softmax:
exp(x_i) / exp(x).sum()
log_softmax:
log( exp(x_i) / exp(x).sum() )
log_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while doing the same operation. You might want to have a look at… | 22 | 2018-01-03T11:41:47.127Z | https://discuss.pytorch.org/t/what-is-the-difference-between-log-softmax-and-softmax/11801/2 | Sure, as long as you use PyTorch operations, you should be fine.
Here is a dummy implementation of nn.MSELoss using the mean:
def my_loss(output, target):
loss = torch.mean((output - target)**2)
return loss
model = nn.Linear(2, 2)
x = torch.randn(1, 2)
target = torch.randn(1, 2)
output = … You can just use a plot library like matplotlib to visualize the output.
Sure! You could use some loss function like nn.BCELoss as your criterion to reconstruct the images.
Forward hooks are a good choice to get the activation map for a certain input.
Here is a small code example as a sta… log_softmax applies logarithm after softmax.
softmax:
exp(x_i) / exp(x).sum()
log_softmax:
log( exp(x_i) / exp(x).sum() )
log_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while doing the same operation. You might want to have a look at… | 607 | {'text': ['log_softmax applies logarithm after softmax.\n\nsoftmax:\n\nexp(x_i) / exp(x).sum()\n\nlog_softmax:\n\nlog( exp(x_i) / exp(x).sum() )\n\nlog_softmax essentially does log(softmax(x)), but the practical implementation is different and more efficient while doing the same operation. You might want to have a look at…'], 'answer_start': [607]} |
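To make the relationship concrete, a small sketch verifying that log_softmax matches log(softmax(x)) numerically (dim=1 is an assumption for a 2-D input; modern PyTorch requires an explicit dim):

import torch
import torch.nn.functional as F

x = torch.randn(5, 10)
a = torch.log(F.softmax(x, dim=1))
b = F.log_softmax(x, dim=1)
print(torch.allclose(a, b, atol=1e-6))  # True; b avoids the unstable intermediate exp/normalize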
Where does `torch._C` come from? | I am reading the code of batch normalization, and I found <a href="https://github.com/pytorch/pytorch/blob/master/torch/nn/functional.py#L454" rel="nofollow noopener">this line</a>:
f = torch._C._functions.BatchNorm(running_mean, running_var, training, momentum, eps, torch.backends.cudnn.enabled)
But I do not find any library called _C. I do not know where torch._C._functions.BatchNorm comes from. | 9 | 2017-04-19T09:01:34.007Z | For completeness, the _C comes from <a href="https://github.com/pytorch/pytorch/blob/master/torch/csrc/Module.cpp#L732-L742">here</a> | 5 | 2017-04-19T10:37:44.938Z | https://discuss.pytorch.org/t/where-does-torch-c-come-from/2015/3 | For completeness, the _C comes from <a href="https://github.com/pytorch/pytorch/blob/master/torch/csrc/Module.cpp#L732-L742">here</a> I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes:
class ConcatDataset(torch.utils.data.Dataset):
def __init__(self, *datasets):
self.datasets = datasets
def __getitem__(self, i):
return tuple(d[i] … you can get the params via: params = model.state_dict() and then they will be a dictionary whose name will be similar to conv3_1 | 1,830 | {'text': ['For completeness, the _C comes from <a href="https://github.com/pytorch/pytorch/blob/master/torch/csrc/Module.cpp#L732-L742">here</a>'], 'answer_start': [1830]} |
Train simultaneously on two datasets | Hello,
I should train using samples from two different datasets, so I initialize two DataLoaders:
train_loader_A = torch.utils.data.DataLoader(
datasets.ImageFolder(traindir_A),
batch_size=args.batch_size, shuffle=True,
num_workers=args.workers, pin_memory=Tr… | 8 | 2017-02-21T20:33:06.195Z | I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes:
class ConcatDataset(torch.utils.data.Dataset):
def __init__(self, *datasets):
self.datasets = datasets
def __getitem__(self, i):
return tuple(d[i] … | 27 | 2017-02-21T23:24:46.182Z | https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/2 | For completeness, the _C comes from <a href="https://github.com/pytorch/pytorch/blob/master/torch/csrc/Module.cpp#L732-L742">here</a> I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes:
class ConcatDataset(torch.utils.data.Dataset):
def __init__(self, *datasets):
self.datasets = datasets
def __getitem__(self, i):
return tuple(d[i] … you can get the params via: params = model.state_dict() and then they will be a dictionary whose name will be similar to conv3_1 | 1,049 | {'text': ['I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes:\n\nclass ConcatDataset(torch.utils.data.Dataset):\n\ndef __init__(self, *datasets):\n\nself.datasets = datasets\n\ndef __getitem__(self, i):\n\nreturn tuple(d[i] …'], 'answer_start': [1049]} |
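The ConcatDataset in the answer above is truncated at return tuple(d[i] …. A hedged completion (the generator expression, __len__, and the demo dataset are assumptions consistent with the stated idea of index-aligned samples):

import torch
from torch.utils.data import Dataset, DataLoader

class ConcatDataset(Dataset):
    def __init__(self, *datasets):
        self.datasets = datasets

    def __getitem__(self, i):
        # One sample from each wrapped dataset, aligned by index.
        return tuple(d[i] for d in self.datasets)

    def __len__(self):
        # The shortest dataset bounds the usable length.
        return min(len(d) for d in self.datasets)

class RandomImages(Dataset):
    def __init__(self, n):
        self.data = torch.randn(n, 3, 8, 8)
    def __getitem__(self, i):
        return self.data[i]
    def __len__(self):
        return len(self.data)

loader = DataLoader(ConcatDataset(RandomImages(8), RandomImages(8)), batch_size=4, shuffle=True)
for batch_a, batch_b in loader:
    pass  # batch_a and batch_b come from dataset A and dataset B respectively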
How to manipulate layer parameters by it's names? | I have a complicated CNN model that contains many layers, I want to copy some of the layer parameters from external data, such as a numpy array.
So how can I set one specific layer’s parameters by the layer name, say “conv3_3” ?
In pytorch I get the model parameters via:
params = list(model.para… | 5 | 2017-03-23T02:04:15.936Z | you can get the params via: params = model.state_dict() and then they will be a dictionary whose name will be similar to conv3_1 | 23 | 2017-03-23T03:51:17.601Z | https://discuss.pytorch.org/t/how-to-manipulate-layer-parameters-by-its-names/1282/2 | For completeness, the _C comes from <a href="https://github.com/pytorch/pytorch/blob/master/torch/csrc/Module.cpp#L732-L742">here</a> I’d recommend creating a new dataset and concatenating the images there, so the copy will be done inside the worker processes:
class ConcatDataset(torch.utils.data.Dataset):
def __init__(self, *datasets):
self.datasets = datasets
def __getitem__(self, i):
return tuple(d[i] … you can get the params via: params = model.state_dict() and then they will be a dictionary whose name will be similar to conv3_1 | 422 | {'text': ['you can get the params via: params = model.state_dict() and then they will be a dictionary whose name will be similar to conv3_1'], 'answer_start': [422]} |
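To make the state_dict() answer concrete, a small sketch of overwriting one layer's weights by name from a numpy array (the layer name conv3_3 follows the question; the shapes are illustrative):

import numpy as np
import torch
import torch.nn as nn

model = nn.Sequential()
model.add_module("conv3_3", nn.Conv2d(3, 8, 3))

params = model.state_dict()
print(list(params.keys()))  # ['conv3_3.weight', 'conv3_3.bias']

new_w = np.random.randn(8, 3, 3, 3).astype(np.float32)
params["conv3_3.weight"] = torch.from_numpy(new_w)  # must match the existing shape
model.load_state_dict(params)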
How to measure time in PyTorch | I have seen lots of ways to measure time in PyTorch. But what is the most proper way to do it now (both for cpu and cuda)?
Should I clear the memory cache if I use timeit?
And is it possible to get accurate results if I’m computing on a cluster? And is there a way to make these results reproducible?
… | 5 | 2018-10-10T13:17:59.887Z | I’ve tried on colab but find
t0 = time.time()
outputs = net(x)
torch.cuda.current_stream().synchronize()
t1 = time.time()
gives a more accurate measurement… | 2 | 2020-08-09T09:19:29.052Z | https://discuss.pytorch.org/t/how-to-measure-time-in-pytorch/26964/13 | I’ve tried on colab but find
t0 = time.time()
outputs = net(x)
torch.cuda.current_stream().synchronize()
t1 = time.time()
gives a more accurate measurement… torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)) should do the trick!
<a href="http://pytorch.org/docs/0.2.0/torch.html#torch.bmm" class="onebox" target="_blank">http://pytorch.org/docs/0.2.0/torch.html#torch.bmm</a> You can try this:
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data) | 1,100 | {'text': ['I’ve tried on colab but find\n\nt0 = time.time()\n\noutputs = net(x)\n\ntorch.cuda.current_stream().synchronize()\n\nt1 = time.time()\n\ngives a more accurate measurement…'], 'answer_start': [1100]} |
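Expanding on the timing answer: CUDA kernels launch asynchronously, so the device must be synchronized before reading the host clock. A hedged sketch of a correct wall-clock measurement (the model and sizes are illustrative):

import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
net = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(64, 1024, device=device)

if device == "cuda":
    torch.cuda.synchronize()   # make sure pending work is done before t0
t0 = time.time()
out = net(x)
if device == "cuda":
    torch.cuda.synchronize()   # wait for the kernel before stopping the clock
print("elapsed:", time.time() - t0)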
Dot product batch-wise | I have two matrices of dimension (6, 256). I would like to calculate the dot product row-wise so that the dimensions of the resulting matrix would be (6 x 1). torch.dot does not support batch-wise calculation. Any efficient way to do this? | 5 | 2017-11-09T20:26:47.611Z | torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)) should do the trick!
<a href="http://pytorch.org/docs/0.2.0/torch.html#torch.bmm" class="onebox" target="_blank">http://pytorch.org/docs/0.2.0/torch.html#torch.bmm</a> | 22 | 2017-11-09T20:38:51.511Z | https://discuss.pytorch.org/t/dot-product-batch-wise/9746/3 | I’ve tried on colab but find
t0 = time.time()
outputs = net(x)
torch.cuda.current_stream().synchronize()
t1 = time.time()
gives a more accurate measurement… torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)) should do the trick!
<a href="http://pytorch.org/docs/0.2.0/torch.html#torch.bmm" class="onebox" target="_blank">http://pytorch.org/docs/0.2.0/torch.html#torch.bmm</a> You can try this:
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data) | 712 | {'text': ['torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)) should do the trick!\n\n<a href="http://pytorch.org/docs/0.2.0/torch.html#torch.bmm" class="onebox" target="_blank">http://pytorch.org/docs/0.2.0/torch.html#torch.bmm</a>'], 'answer_start': [712]} |
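A runnable sketch of the bmm trick, plus an equivalent elementwise formulation that avoids the reshapes (the tolerance is only for float rounding):

import torch

A = torch.randn(6, 256)
B = torch.randn(6, 256)

d1 = torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)).view(6)  # row-wise dot products
d2 = (A * B).sum(dim=1)                                       # same result, no reshaping
print(torch.allclose(d1, d2, atol=1e-5))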
How to print model's parameters with its name and `requires_grad value`? | I want to print the model’s parameters with their names. I found two ways to print a summary, but I want to use both requires_grad and name in the same for loop. Can I do this? I want to check gradients during training.
for p in model.parameters():
# p.requires_grad: bool
# p.data: Tensor
for name,… | 23 | 2017-12-05T02:13:23.215Z | You can try this:
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data) | 75 | 2017-12-05T03:04:41.921Z | https://discuss.pytorch.org/t/how-to-print-models-parameters-with-its-name-and-requires-grad-value/10778/2 | I’ve tried on colab but find
t0 = time.time()
outputs = net(x)
torch.cuda.current_stream().synchronize()
t1 = time.time()
gives a more accurate measurement… torch.bmm(A.view(6, 1, 256), B.view(6, 256, 1)) should do the trick!
<a href="http://pytorch.org/docs/0.2.0/torch.html#torch.bmm" class="onebox" target="_blank">http://pytorch.org/docs/0.2.0/torch.html#torch.bmm</a> You can try this:
for name, param in model.named_parameters():
if param.requires_grad:
print(name, param.data) | 379 | {'text': ['You can try this:\n\nfor name, param in model.named_parameters():\n\nif param.requires_grad:\n\nprint(name, param.data)'], 'answer_start': [379]} |
What we should use align_corners = False | I am very confused by this parameter in the PyTorch documentation. According to wiki
<a href="https://en.wikipedia.org/wiki/Bilinear_interpolation" rel="nofollow noopener">https://en.wikipedia.org/wiki/Bilinear_interpolation</a>, the bilinear interpolation formula result is consistent with
align_corners=True, which is the default before PyTorch 0.4.0.
I want to know when should use align_corner… | 5 | 2018-08-08T12:51:54.297Z | I will show you a 1-dimension example.
Suppose that you want to resize tensor [0, 1] to [?, ?, ?, ?], so the factor=2.
Now we only care about coordinates.
For mode=‘bilinear’ and align_corners=False, the result is the same with opencv and other popular image processing libraries (I guess). Corres… | 42 | 2019-01-03T13:30:11.225Z | https://discuss.pytorch.org/t/what-we-should-use-align-corners-false/22663/5 | I will show you a 1-dimension example.
Suppose that you want to resize tensor [0, 1] to [?, ?, ?, ?], so the factor=2.
Now we only care about coordinates.
For mode=‘bilinear’ and align_corners=False, the result is the same with opencv and other popular image processing libraries (I guess). Corres… It sounds like the problem is that your xs_h don’t have requires_grad=True. Have you tried creating Variables with requires_grad=True? Yes, you can get the gradient for each weight in the model w.r.t that weight. Just like this:
print(net.conv11.weight.grad)
print(net.conv21.bias.grad)
The reason loss.grad gives you None is that “loss” is not in the optimizer, whereas the “net.parameters()” are in the optimizer.
optimizer = opti… | 982 | {'text': ['I will show you a 1-dimension example.\n\nSuppose that you want to resize tensor [0, 1] to [?, ?, ?, ?], so the factor=2.\n\nNow we only care about coordinates.\n\nFor mode=‘bilinear’ and align_corners=False, the result is the same with opencv and other popular image processing libraries (I guess). Corres…'], 'answer_start': [982]} |
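To see the coordinate difference from the align_corners answer in runnable form, a 1-D sketch (the values in the comments are what current PyTorch produces for this input):

import torch
import torch.nn.functional as F

x = torch.tensor([[[0.0, 1.0]]])   # (batch, channels, length)
print(F.interpolate(x, scale_factor=2, mode="linear", align_corners=False))
# tensor([[[0.0000, 0.2500, 0.7500, 1.0000]]])  -- matches OpenCV-style resizing
print(F.interpolate(x, scale_factor=2, mode="linear", align_corners=True))
# tensor([[[0.0000, 0.3333, 0.6667, 1.0000]]])  -- corner samples map exactly onto corners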
RuntimeError: element 0 of variables does not require grad and does not have a grad_fn | hi, i have a problem here, i got a sequence of Variables which are the outputs of the bi-directional RNN, and i stacked them into a matrix xs_h whose dimension is (seq_length, batch_size, hidden_size), them i want to update the matrix xs_h by convoluting on two slices in xs_h, some codes are as foll… | 5 | 2017-12-12T18:18:32.087Z | It sounds like the problem is that your xs_h don’t have requires_grad=True. Have you tried creating Variables with requires_grad=True? | 7 | 2017-12-12T18:30:49.106Z | https://discuss.pytorch.org/t/runtimeerror-element-0-of-variables-does-not-require-grad-and-does-not-have-a-grad-fn/11074/2 | I will show you a 1-dimension example.
Suppose that you want to resize tensor [0, 1] to [?, ?, ?, ?], so the factor=2.
Now we only care about coordinates.
For mode=‘bilinear’ and align_corners=False, the result is the same with opencv and other popular image processing libraries (I guess). Corres… It sounds like the problem is that your xs_h don’t have requires_grad=True. Have you tried creating Variables with requires_grad=True? Yes, you can get the gradient for each weight in the model w.r.t that weight. Just like this:
print(net.conv11.weight.grad)
print(net.conv21.bias.grad)
The reason loss.grad gives you None is that “loss” is not in the optimizer, whereas the “net.parameters()” are in the optimizer.
optimizer = opti… | 800 | {'text': ['It sounds like the problem is that your xs_h don’t have requires_grad=True. Have you tried creating Variables with requires_grad=True?'], 'answer_start': [800]} |
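A minimal sketch of the requires_grad point from the answer above: backward() fails with this exact error when no input of the graph requires gradients:

import torch

x = torch.randn(3)                 # requires_grad=False by default
# (x * 2).sum().backward()         # RuntimeError: element 0 ... does not require grad

x = torch.randn(3, requires_grad=True)
(x * 2).sum().backward()           # works
print(x.grad)                      # tensor([2., 2., 2.])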
How to print the computed gradient values for a network | I want to print the gradient values before and after doing backpropagation, but I have no idea how to do it.
If I do loss.grad it gives me None.
Can I get the gradient for each weight in the model (with respect to that weight)?
sample code:
import torch
import torch.nn as nn
import torch.nn.fun… | 3 | 2019-01-08T22:29:23.434Z | Yes, you can get the gradient for each weight in the model w.r.t that weight. Just like this:
print(net.conv11.weight.grad)
print(net.conv21.bias.grad)
The reason loss.grad gives you None is that “loss” is not in the optimizer, whereas the “net.parameters()” are in the optimizer.
optimizer = opti… | 21 | 2019-01-10T06:45:51.666Z | https://discuss.pytorch.org/t/how-to-print-the-computed-gradient-values-for-a-network/34179/4 | I will show you a 1-dimension example.
Suppose that you want to resize tensor [0, 1] to [?, ?, ?, ?], so the factor=2.
Now we only care about coordinates.
For mode=‘bilinear’ and align_corners=False, the result is the same with opencv and other popular image processing libraries (I guess). Corres… It sounds like the problem is that your xs_h don’t have requires_grad=True. Have you tried creating Variables with requires_grad=True? Yes, you can get the gradient for each weight in the model w.r.t that weight. Just like this:
print(net.conv11.weight.grad)
print(net.conv21.bias.grad)
The reason loss.grad gives you None is that “loss” is not in the optimizer, whereas the “net.parameters()” are in the optimizer.
optimizer = opti… | 444 | {'text': ['Yes, you can get the gradient for each weight in the model w.r.t that weight. Just like this:\n\nprint(net.conv11.weight.grad)\n\nprint(net.conv21.bias.grad)\n\nThe reason loss.grad gives you None is that “loss” is not in the optimizer, whereas the “net.parameters()” are in the optimizer.\n\noptimizer = opti…'], 'answer_start': [444]} |
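For completeness, loss.grad is None mainly because loss is a non-leaf tensor, and PyTorch only retains gradients on leaves by default; retain_grad() keeps it if you really want it. A small sketch:

import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()
loss.retain_grad()     # opt in to keeping the gradient of a non-leaf tensor
loss.backward()
print(loss.grad)       # tensor(1.)  (d loss / d loss)
print(x.grad)          # 2 * x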
About Normalization using pre-trained vgg16 networks | I am trying to use the given vgg16 network to extract features (not fine-tuning) for my own task dataset, such as UCF101, rather than ImageNet. Since vgg16 is trained on ImageNet, for image normalization, I see a lot of people just use the mean and std statistics calculated for ImageNet (mean=[0.485… | 6 | 2018-08-21T05:59:03.923Z | This should work:
class MyDataset(Dataset):
def __init__(self):
self.data = torch.randn(100, 3, 24, 24)
def __getitem__(self, index):
x = self.data[index]
return x
def __len__(self):
return len(self.data)
dataset = MyDataset()
loader = Dat… | 52 | 2018-08-21T11:05:19.890Z | https://discuss.pytorch.org/t/about-normalization-using-pre-trained-vgg16-networks/23560/6 | This should work:
class MyDataset(Dataset):
def __init__(self):
self.data = torch.randn(100, 3, 24, 24)
def __getitem__(self, index):
x = self.data[index]
return x
def __len__(self):
return len(self.data)
dataset = MyDataset()
loader = Dat… When you use .data, you get a new Tensor with requires_grad=False, so cloning it won’t involve autograd. So both are equivalent, but there might be a (small) speed difference, I am not sure about that.
Another use case could is when you want to clone/copy a non-parameter Tensor without autograd. Yo… You could use this code snippet to transform your class indices into a one-hot encoded target:
target = torch.randint(0, 10, (10,))
one_hot = torch.nn.functional.one_hot(target) | 1,504 | {'text': ['This should work:\n\nclass MyDataset(Dataset):\n\ndef __init__(self):\n\nself.data = torch.randn(100, 3, 24, 24)\n\ndef __getitem__(self, index):\n\nx = self.data[index]\n\nreturn x\n\ndef __len__(self):\n\nreturn len(self.data)\n\ndataset = MyDataset()\n\nloader = Dat…'], 'answer_start': [1504]} |
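The snippet above stops at loader = Dat…. A hedged completion in the spirit of the thread, computing your own dataset's per-channel mean and std to use in transforms.Normalize (the running-sum formulation gives the exact population std):

import torch
from torch.utils.data import Dataset, DataLoader

class MyDataset(Dataset):
    def __init__(self):
        self.data = torch.randn(100, 3, 24, 24)
    def __getitem__(self, index):
        return self.data[index]
    def __len__(self):
        return len(self.data)

loader = DataLoader(MyDataset(), batch_size=10)
mean = torch.zeros(3)
sq = torch.zeros(3)
n = 0
for x in loader:
    mean += x.mean(dim=(0, 2, 3)) * x.size(0)
    sq += x.pow(2).mean(dim=(0, 2, 3)) * x.size(0)
    n += x.size(0)
mean /= n
std = (sq / n - mean ** 2).sqrt()
print(mean, std)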
Copy.deepcopy() vs clone() | when copying modules/tensors around, which one should I use?
are they interchangeable?
Thanks a lot | 5 | 2019-09-03T07:33:29.529Z | When you use .data, you get a new Tensor with requires_grad=False, so cloning it won’t involve autograd. So both are equivalent, but there might be a (small) speed difference, I am not sure about that.
Another use case could be when you want to clone/copy a non-parameter Tensor without autograd. Yo… | 7 | 2019-09-03T12:11:05.616Z | https://discuss.pytorch.org/t/copy-deepcopy-vs-clone/55022/4 | This should work:
class MyDataset(Dataset):
def __init__(self):
self.data = torch.randn(100, 3, 24, 24)
def __getitem__(self, index):
x = self.data[index]
return x
def __len__(self):
return len(self.data)
dataset = MyDataset()
loader = Dat… When you use .data, you get a new Tensor with requires_grad=False, so cloning it won’t involve autograd. So both are equivalent, but there might be a (small) speed difference, I am not sure about that.
Another use case could be when you want to clone/copy a non-parameter Tensor without autograd. Yo… You could use this code snippet to transform your class indices into a one-hot encoded target:
target = torch.randint(0, 10, (10,))
one_hot = torch.nn.functional.one_hot(target) | 1,010 | {'text': ['When you use .data, you get a new Tensor with requires_grad=False, so cloning it won’t involve autograd. So both are equivalent, but there might be a (small) speed difference, I am not sure about that.\n\nAnother use case could be when you want to clone/copy a non-parameter Tensor without autograd. Yo…'], 'answer_start': [1010]} |
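A compact sketch of the distinction discussed in this thread (standard autograd behavior, not code from the thread itself):

import copy
import torch

x = torch.randn(3, requires_grad=True)
a = x.clone()            # stays in the autograd graph: gradients flow back to x
b = x.detach().clone()   # independent copy, requires_grad=False
c = copy.deepcopy(x)     # new leaf with requires_grad=True, detached from x's graph
print(a.requires_grad, b.requires_grad, c.requires_grad)  # True False True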
PyTorch way for one-hot-encoding multiclass target variable | Hey,
Sorry for maybe super basic question but could not find it.
What is the correct PyTorch way to encode a multi-class target variable?
I have > 30 target classes for target variable - like AA, AB, BB, BA, BC ....
Should I use ScikitLearn tools and then convert numpy arrays into torch tensors?
Or… | 4 | 2020-02-01T08:48:53.902Z | You could use this code snippet to transform your class indices into a one-hot encoded target:
target = torch.randint(0, 10, (10,))
one_hot = torch.nn.functional.one_hot(target) | 29 | 2020-02-01T08:55:45.032Z | https://discuss.pytorch.org/t/pytocrh-way-for-one-hot-encoding-multiclass-target-variable/68321/2 | This should work:
class MyDataset(Dataset):
def __init__(self):
self.data = torch.randn(100, 3, 24, 24)
def __getitem__(self, index):
x = self.data[index]
return x
def __len__(self):
return len(self.data)
dataset = MyDataset()
loader = Dat… When you use .data, you get a new Tensor with requires_grad=False, so cloning it won’t involve autograd. So both are equivalent, but there might be a (small) speed difference, I am not sure about that.
Another use case could is when you want to clone/copy a non-parameter Tensor without autograd. Yo… You could use this code snippet to transform your class indices into a one-hot encoded target:
target = torch.randint(0, 10, (10,))
one_hot = torch.nn.functional.one_hot(target) | 567 | {'text': ['You could use this code snippet to transform your class indices into a one-hot encoded target:\n\ntarget = torch.randint(0, 10, (10,))\n\none_hot = torch.nn.functional.one_hot(target)'], 'answer_start': [567]} |
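Two small caveats worth adding to the answer, shown as a sketch: without num_classes, one_hot infers the class count from the largest index actually present, and the result is int64, so it usually needs a cast before entering a float loss:

import torch
import torch.nn.functional as F

target = torch.randint(0, 10, (10,))
one_hot = F.one_hot(target, num_classes=10)  # shape (10, 10), dtype torch.int64
one_hot = one_hot.float()                    # cast for losses that expect floats
print(one_hot.shape, one_hot.dtype)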
How to tile a tensor? | If I have a tensor like:
z = torch.FloatTensor([[1,2,3],[4,5,6]])
1 2 3
4 5 6
How might I turn it into a tensor like:
1 2 3
1 2 3
1 2 3
1 2 3
4 5 6
4 5 6
4 5 6
4 5 6
I imagine that <a href="http://pytorch.org/docs/master/tensors.html?highlight=repeat#torch.Tensor.repeat" rel="nofollow noopener">torch.repeat()</a> is somehow in play here.
The only solution I have come up with is to do:
z.repeat(1,4).view(-1, 3… | 3 | 2018-02-20T17:20:42.982Z | For the second you can do:
z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
For the first, I don’t think there are operations that combine all of these together. Maxunpool does something similar but doesn’t have the repeat ability. | 9 | 2018-02-20T20:11:52.127Z | https://discuss.pytorch.org/t/how-to-tile-a-tensor/13853/2 | For the second you can do:
z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
For the first, I don’t think there are operations that combine all of these together. Maxunpool does something similar but doesn’t have the repeat ability. Oh, in that case, neither of these solutions work:
>>> t = torch.tensor([[1, 2, 3], [4, 4, 4]])
>>> t
tensor([[1, 2, 3],
[4, 4, 4]])
>>> torch.cat(3*[t])
tensor([[1, 2, 3],
[4, 4, 4],
[1, 2, 3],
[4, 4, 4],
… Hi,
A leaf Variable is a variable that is at the beginning of the graph. That means that no operation tracked by the autograd engine created it.
This is what you want when you optimize neural networks as it is usually your weights or input.
So to be able to give weights to the optimizer, they sho… | 1,492 | {'text': ['For the second you can do:\n\nz.view(-1, 1).repeat(1, 3).view(3, 9)\n\n1 1 1 2 2 2 3 3 3\n\n4 4 4 5 5 5 6 6 6\n\n7 7 7 8 8 8 9 9 9\n\nFor the first, I don’t think there are operations that combine all of these together. Maxunpool does something similar but doesn’t have the repeat ability.'], 'answer_start': [1492]} |
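On newer PyTorch (>= 1.1), repeat_interleave expresses both tilings from this thread directly, which may read more clearly than the view/repeat chains:

import torch

z = torch.tensor([[1, 2, 3], [4, 5, 6]])
rows = z.repeat_interleave(4, dim=0)   # [1,2,3] four times, then [4,5,6] four times
elems = z.repeat_interleave(3, dim=1)  # 1 1 1 2 2 2 3 3 3 / 4 4 4 5 5 5 6 6 6
print(rows.shape, elems.shape)         # torch.Size([8, 3]) torch.Size([2, 9])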
Repeat examples along batch dimension | Hi, I’m trying to repeat tensors along the batch dimension.
Ex)
We have a batch (8 x 3 x 224 x 224) where its size is 8
and let’s say it is called as [a, b, c, d, e, f, g, h], and each alphabet denotes an example in the batch.
Then, I want to repeat each of them three times, resulting in EXACTLY… | 3 | 2019-02-02T08:18:19.681Z | Oh, in that case, neither of these solutions work:
>>> t = torch.tensor([[1, 2, 3], [4, 4, 4]])
>>> t
tensor([[1, 2, 3],
[4, 4, 4]])
>>> torch.cat(3*[t])
tensor([[1, 2, 3],
[4, 4, 4],
[1, 2, 3],
[4, 4, 4],
… | 6 | 2019-02-03T03:35:19.656Z | https://discuss.pytorch.org/t/repeat-examples-along-batch-dimension/36217/5 | For the second you can do:
z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
For the first, I don’t think there are operations that combine all of these together. Maxunpool does something similar but doesn’t have the repeat ability. Oh, in that case, neither of these solutions work:
>>> t = torch.tensor([[1, 2, 3], [4, 4, 4]])
>>> t
tensor([[1, 2, 3],
[4, 4, 4]])
>>> torch.cat(3*[t])
tensor([[1, 2, 3],
[4, 4, 4],
[1, 2, 3],
[4, 4, 4],
… Hi,
A leaf Variable is a variable that is at the beginning of the graph. That means that no operation tracked by the autograd engine created it.
This is what you want when you optimize neural networks as it is usually your weights or input.
So to be able to give weights to the optimizer, they sho… | 1,026 | {'text': ['Oh, in that case, neither of these solutions work:\n\n>>> t = torch.tensor([[1, 2, 3], [4, 4, 4]])\n\n>>> t\n\ntensor([[1, 2, 3],\n\n[4, 4, 4]])\n\n>>> torch.cat(3*[t])\n\ntensor([[1, 2, 3],\n\n[4, 4, 4],\n\n[1, 2, 3],\n\n[4, 4, 4],\n\n…'], 'answer_start': [1026]} |
ValueError: can't optimize a non-leaf Tensor? | Dear all:
I know that when
x_cuda = x_cpu.to(device)
It will trigger error:
ValueError: can’t optimize a non-leaf Tensor
when you use optimizer = optim.Adam([x_cuda]). The right way may be optimizer = optim.Adam([x_cpu]). That is to say, we need to keep references to both x_cpu and x_cuda.
Since in … | 4 | 2018-07-26T08:10:24.518Z | Hi,
A leaf Variable is a variable that is at the beginning of the graph. That means that no operation tracked by the autograd engine created it.
This is what you want when you optimize neural networks as it is usually your weights or input.
So to be able to give weights to the optimizer, they sho… | 47 | 2018-07-26T09:04:35.730Z | https://discuss.pytorch.org/t/valueerror-cant-optimize-a-non-leaf-tensor/21751/2 | For the second you can do:
z.view(-1, 1).repeat(1, 3).view(3, 9)
1 1 1 2 2 2 3 3 3
4 4 4 5 5 5 6 6 6
7 7 7 8 8 8 9 9 9
For the first, I don’t think there are operations that combine all of these together. Maxunpool does something similar but doesn’t have the repeat ability. Oh, in that case, neither of these solutions work:
>>> t = torch.tensor([[1, 2, 3], [4, 4, 4]])
>>> t
tensor([[1, 2, 3],
[4, 4, 4]])
>>> torch.cat(3*[t])
tensor([[1, 2, 3],
[4, 4, 4],
[1, 2, 3],
[4, 4, 4],
… Hi,
A leaf Variable is a variable that is at the beginning of the graph. That means that no operation tracked by the autograd engine created it.
This is what you want when you optimize neural networks as it is usually your weights or input.
So to be able to give weights to the optimizer, they sho… | 532 | {'text': ['Hi,\n\nA leaf Variable is a variable that is at the beginning of the graph. That means that no operation tracked by the autograd engine created it.\n\nThis is what you want when you optimize neural networks as it is usually your weights or input.\n\nSo to be able to give weights to the optimizer, they sho…'], 'answer_start': [532]} |
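A runnable sketch of the fix the answer describes: create the tensor to optimize directly on the target device so it remains a leaf, instead of moving a CPU leaf with .to(device):

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)  # a leaf on the right device
opt = torch.optim.Adam([x], lr=0.1)

loss = (x ** 2).sum()
loss.backward()
opt.step()    # no "can't optimize a non-leaf Tensor" error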
ValueError: Expected input batch_size (324) to match target batch_size (4) | I’m getting the following error. I have tried every solution provided on any platform, but nothing’s working. My dataset is of facial expressions and all the images are in grayscale. The code is pasted below:
import torch
import torchvision
import torchvision.transforms as transforms
from tor… | 7 | 2018-09-04T20:02:39.923Z | Thank You so much. It really worked. Can you please tell me how and what you checked to conclude the problem? | 0 | 2018-09-05T12:35:17.768Z | https://discuss.pytorch.org/t/valueerror-expected-input-batch-size-324-to-match-target-batch-size-4/24498/5 | Thank You so much. It really worked. Can you please tell me how and what you checked to conclude the problem? In my experience, I would first build an HDF5 file with all your images, which you can build easily following the documentation of h5py on <a href="http://docs.h5py.org/en/latest/" rel="nofollow noopener">http://docs.h5py.org/en/latest/</a>. During training, build a class inheriting from Dataset which returns your images. Something along this line:
class dataset_h5(t… As of 1.8, PyTorch now has <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noopener nofollow ugc">LazyLinear</a> which <a href="https://stackoverflow.com/a/68284577/9067615" rel="noopener nofollow ugc">infers the input dimension</a>:
A torch.nn.Linear module where in_features is inferred. | 1,680 | {'text': ['Thank You so much. It really worked. Can you please tell me how and what you checked to conclude the problem?'], 'answer_start': [1680]} |
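The accepted fix is truncated out of this dump, but a very common cause of this exact error is flattening with view(-1, N) using the wrong N, which silently changes the batch size. A hedged illustration:

import torch

x = torch.randn(4, 9, 3, 3)      # batch of 4, 81 features per sample
bad = x.view(-1, 9)              # shape (36, 9): the batch dimension became 36
good = x.view(x.size(0), -1)     # shape (4, 81): batch dimension preserved
print(bad.shape, good.shape)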
How to speed up the data loader | Hi
I want to know how to speed up the dataloader. I am using torch.utils.data.DataLoader(8 workers) to train resnet18 on my own dataset. My environment is Ubuntu 16.04, 3 * Titan Xp, SSD 1T.
Epoch: [1079][0/232]
Time 5.149 (5.149)
Data 5.056 (5.056)
Loss 0.0648 (0.0648)
Prec@1 98.047 (98.047)
… | 4 | 2018-02-17T07:09:02.147Z | In my experience, I would first build an HDF5 file with all your images, which you can build easily following the documentation of h5py on <a href="http://docs.h5py.org/en/latest/" rel="nofollow noopener">http://docs.h5py.org/en/latest/</a>. During training, build a class inheriting from Dataset which returns your images. Something along this line:
class dataset_h5(t… | 34 | 2018-02-17T09:30:38.434Z | https://discuss.pytorch.org/t/how-to-speed-up-the-data-loader/13740/3 | Thank You so much. It really worked. Can you please tell me how and what you checked to conclude the problem? In my experience, I would first build an HDF5 file with all your images, which you can build easily following the documentation of h5py on <a href="http://docs.h5py.org/en/latest/" rel="nofollow noopener">http://docs.h5py.org/en/latest/</a>. During training, build a class inheriting from Dataset which returns your images. Something along this line:
class dataset_h5(t… As of 1.8, PyTorch now has <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noopener nofollow ugc">LazyLinear</a> which <a href="https://stackoverflow.com/a/68284577/9067615" rel="noopener nofollow ugc">infers the input dimension</a>:
A torch.nn.Linear module where in_features is inferred. | 950 | {'text': ['In my experience, I would first build an HDF5 file with all your images, which you can build easily following the documentation of h5py on <a href="http://docs.h5py.org/en/latest/" rel="nofollow noopener">http://docs.h5py.org/en/latest/</a>. During training, build a class inheriting from Dataset which returns your images. Something along this line:\n\nclass dataset_h5(t…'], 'answer_start': [950]} |
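The dataset_h5 class in the answer above is cut off. A hedged completion (the HDF5 layout, an "images" dataset inside the file, is an assumption; opening the file lazily gives each DataLoader worker its own handle):

import h5py
import torch
from torch.utils.data import Dataset

class dataset_h5(Dataset):
    def __init__(self, path):
        self.path = path
        self.file = None
        with h5py.File(path, "r") as f:
            self.length = len(f["images"])

    def __getitem__(self, index):
        if self.file is None:                  # open once per worker process
            self.file = h5py.File(self.path, "r")
        return torch.from_numpy(self.file["images"][index])

    def __len__(self):
        return self.length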
Inferring shape via flatten operator | Is there a flatten-like operator to calculate the shape of a layer output? An example would be transitioning from a conv layer to a linear layer. In all the examples I’ve seen thus far this seems to be manually calculated, ex:
class Net(nn.Module):
def __init__(self):
super(Net, self).__i… | 8 | 2017-01-22T21:41:26.352Z | As of 1.8, PyTorch now has <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noopener nofollow ugc">LazyLinear</a> which <a href="https://stackoverflow.com/a/68284577/9067615" rel="noopener nofollow ugc">infers the input dimension</a>:
A torch.nn.Linear module where in_features is inferred. | 3 | 2021-07-07T11:11:46.843Z | https://discuss.pytorch.org/t/inferring-shape-via-flatten-operator/138/20 | Thank You so much. It really worked. Can you please tell me how and what you checked to conclude the problem? In my experience, I would first build an HDF5 file with all your images, which you can build easily following the documentation of h5py on <a href="http://docs.h5py.org/en/latest/" rel="nofollow noopener">http://docs.h5py.org/en/latest/</a>. During training, build a class inheriting from Dataset which returns your images. Something along this line:
class dataset_h5(t… As of 1.8, PyTorch now has <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noopener nofollow ugc">LazyLinear</a> which <a href="https://stackoverflow.com/a/68284577/9067615" rel="noopener nofollow ugc">infers the input dimension</a>:
A torch.nn.Linear module where in_features is inferred. | 489 | {'text': ['As of 1.8, PyTorch now has <a href="https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html" rel="noopener nofollow ugc">LazyLinear</a> which <a href="https://stackoverflow.com/a/68284577/9067615" rel="noopener nofollow ugc">infers the input dimension</a>:\n\nA torch.nn.Linear module where in_features is inferred.'], 'answer_start': [489]} |
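A usage sketch for LazyLinear: the conv-to-linear transition no longer needs manual shape arithmetic, because in_features is resolved on the first forward pass (sizes here are illustrative):

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 8, 3),
    nn.ReLU(),
    nn.Flatten(),
    nn.LazyLinear(10),            # in_features inferred at the first call
)
print(net(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 10])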
Understanding Convolution 1D output and Input | Hi,
I have input of dimension 32 x 100 x 1 where 32 is the batch size.
I wanted to convolve over the 100 x 1 array in the input for each of the 32 such arrays, i.e. a single data point in the batch has an array like that.
I hoped that a conv1d(100, 100, 1) layer would work.
How does this convolve over… | 5 | 2018-11-28T11:36:20.355Z | Well, not really. Currently you are using a signal of shape [32, 100, 1], which corresponds to [batch_size, in_channels, len].
Each kernel in your conv layer creates an output channel, as <a class="mention" href="/u/krishnavishalv">@krishnavishalv</a> explained, and convolves the “temporal dimension”, i.e. the len dimension.
Since len is in you… | 19 | 2018-11-28T12:49:47.710Z | https://discuss.pytorch.org/t/understanding-convolution-1d-output-and-input/30764/6 | Well, not really. Currently you are using a signal of shape [32, 100, 1], which corresponds to [batch_size, in_channels, len].
Each kernel in your conv layer creates an output channel, as <a class="mention" href="/u/krishnavishalv">@krishnavishalv</a> explained, and convolves the “temporal dimension”, i.e. the len dimension.
Since len is in you… The weight_decay parameter adds an L2 penalty to the cost which can effectively lead to smaller model weights. It seems to work in my case:
import torch
import numpy as np
np.random.seed(123)
np.set_printoptions(8, suppress=True)
x_numpy = np.random.random((3, 4)).astype(np.double)
w_numpy = np… the real answer to this is here:
[image]
<a href="https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/42">Clone and detach in v0.4.0</a>
Sorry if this is repetitive, but I still don’t get it. What is wrong with doing clone first and then detach, i.e. .clone().detach()?
Nothing. They will give an equivalent end result.
The minor optimizati… | 1,626 | {'text': ['Well, not really. Currently you are using a signal of shape [32, 100, 1], which corresponds to [batch_size, in_channels, len].\n\nEach kernel in your conv layer creates an output channel, as <a class="mention" href="/u/krishnavishalv">@krishnavishalv</a> explained, and convolves the “temporal dimension”, i.e. the len dimension.\n\nSince len is in you…'], 'answer_start': [1626]} |
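A sketch of the shape point in the answer: Conv1d expects (batch, channels, length), so a (32, 100, 1) batch of length-100, single-feature signals needs a permute first:

import torch
import torch.nn as nn

x = torch.randn(32, 100, 1)    # 32 samples, 100 timesteps, 1 feature
x = x.permute(0, 2, 1)         # -> (32, 1, 100): 1 in-channel, length 100
conv = nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3)
print(conv(x).shape)           # torch.Size([32, 16, 98])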
How does SGD weight_decay work? | Hello,
I wrote a toy example to check SGD weight_decay,
but it seems to have no effect on the gradient update.
Am I misunderstanding the meaning of weight_decay?
Thank you very much.
PyTorch 1.0
import torch
import numpy as np
np.random.seed(123)
np.set_printoptions(8, suppress=True)
x_numpy = np… | 3 | 2018-12-26T16:07:16.568Z | The weight_decay parameter adds an L2 penalty to the cost which can effectively lead to smaller model weights. It seems to work in my case:
import torch
import numpy as np
np.random.seed(123)
np.set_printoptions(8, suppress=True)
x_numpy = np.random.random((3, 4)).astype(np.double)
w_numpy = np… | 13 | 2018-12-26T16:27:35.405Z | https://discuss.pytorch.org/t/how-does-sgd-weight-decay-work/33105/2 | Well, not really. Currently you are using a signal of shape [32, 100, 1], which corresponds to [batch_size, in_channels, len].
Each kernel in your conv layer creates an output channel, as <a class="mention" href="/u/krishnavishalv">@krishnavishalv</a> explained, and convolves the “temporal dimension”, i.e. the len dimension.
Since len is in you… The weight_decay parameter adds an L2 penalty to the cost which can effectively lead to smaller model weights. It seems to work in my case:
import torch
import numpy as np
np.random.seed(123)
np.set_printoptions(8, suppress=True)
x_numpy = np.random.random((3, 4)).astype(np.double)
w_numpy = np… the real answer to this is here:
[image]
<a href="https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/42">Clone and detach in v0.4.0</a>
Sorry if this repetitive but I still don’t get it. What is wrong with doing clone first and then detach i.e. .clone().detach() ?
Nothing. They will given an equivalent end result.
The minor optimizati… | 1,170 | {'text': ['The weight_decay parameter adds an L2 penalty to the cost which can effectively lead to smaller model weights. It seems to work in my case:\n\nimport torch\n\nimport numpy as np\n\nnp.random.seed(123)\n\nnp.set_printoptions(8, suppress=True)\n\nx_numpy = np.random.random((3, 4)).astype(np.double)\n\nw_numpy = np…'], 'answer_start': [1170]} |
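A tiny demonstration that weight_decay really changes the update even when the data gradient is zero (inside SGD the decay is applied as grad += weight_decay * param):

import torch
import torch.nn as nn

w = nn.Parameter(torch.ones(2))
opt = torch.optim.SGD([w], lr=0.1, weight_decay=0.5)
(w * 0.0).sum().backward()   # data-term gradient is exactly zero
opt.step()
print(w)                     # 0.95 each: w - lr * weight_decay * w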
Difference between detach().clone() and clone().detach() | can someone explain to me the difference between detach().clone() and clone().detach() for a tensor
A = torch.rand(2,2)
what is the difference between A.detach().clone() and A.clone().detach()
are they equal?
when I do detach it makes requires_grad False, and clone makes a copy of it, but how the… | 3 | 2019-01-08T21:08:26.228Z | the real answer to this is here:
[image]
<a href="https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/42">Clone and detach in v0.4.0</a>
Sorry if this is repetitive, but I still don’t get it. What is wrong with doing clone first and then detach, i.e. .clone().detach()?
Nothing. They will give an equivalent end result.
The minor optimizati… | 0 | 2020-06-17T19:08:14.921Z | https://discuss.pytorch.org/t/difference-between-detach-clone-and-clone-detach/34173/14 | Well, not really. Currently you are using a signal of shape [32, 100, 1], which corresponds to [batch_size, in_channels, len].
Each kernel in your conv layer creates an output channel, as <a class="mention" href="/u/krishnavishalv">@krishnavishalv</a> explained, and convolves the “temporal dimension”, i.e. the len dimension.
Since len is in you… The weight_decay parameter adds an L2 penalty to the cost which can effectively lead to smaller model weights. It seems to work in my case:
import torch
import numpy as np
np.random.seed(123)
np.set_printoptions(8, suppress=True)
x_numpy = np.random.random((3, 4)).astype(np.double)
w_numpy = np… the real answer to this is here:
[image]
<a href="https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/42">Clone and detach in v0.4.0</a>
Sorry if this is repetitive, but I still don’t get it. What is wrong with doing clone first and then detach, i.e. .clone().detach()?
Nothing. They will give an equivalent end result.
The minor optimizati… | 669 | {'text': ['the real answer to this is here:\n\n[image]\n\n<a href="https://discuss.pytorch.org/t/clone-and-detach-in-v0-4-0/16861/42">Clone and detach in v0.4.0</a>\n\nSorry if this is repetitive, but I still don’t get it. What is wrong with doing clone first and then detach, i.e. .clone().detach()?\n\nNothing. They will give an equivalent end result.\n\nThe minor optimizati…'], 'answer_start': [669]} |
[Solved] Reverse gradients in backward pass | Hello everyone,
I am working on building a DANN (Ganin et al. 2016) in PyTorch. This model is used for domain adaptation, and forces a classifier to only learn features that exist in two different domains, for the purpose of generalization across these domains. The DANN uses a Gradient Reversal lay… | 6 | 2017-05-31T16:13:02.824Z | I think that should work.
Also, I just realized that Function should be defined in a different way in the newer versions of pytorch:
class GradReverse(Function):
@staticmethod
def forward(ctx, x):
return x.view_as(x)
@staticmethod
def backward(ctx, grad_output):
r… | 18 | 2017-05-31T21:26:31.384Z | https://discuss.pytorch.org/t/solved-reverse-gradients-in-backward-pass/3589/4 | I think that should work.
Also, I just realized that Function should be defined in a different way in the newer versions of pytorch:
class GradReverse(Function):
@staticmethod
def forward(ctx, x):
return x.view_as(x)
@staticmethod
def backward(ctx, grad_output):
r… Well, the specified output size is the output size, as <a href="https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d" rel="nofollow noopener">in the documentation</a>.
In more detail:
What happens is that the pooling stencil size (aka kernel size) is determined to be (input_size+target_size-1) // target_size, i.e. rounded up. With this Then the positions of where to apply the stencil … In this context dim refers to the dimension in which the softmax function will be applied.
>>> a = Variable(torch.randn(5,2))
>>> F.softmax(a, dim=1)
Variable containing:
0.6360 0.3640
0.3541 0.6459
0.2412 0.7588
0.0860 0.9140
0.6258 0.3742
[torch.FloatTensor of size 5x2]
>>> F.softmax(a… | 2,060 | {'text': ['I think that should work.\n\nAlso, I just realized that Function should be defined in a different way in the newer versions of pytorch:\n\nclass GradReverse(Function):\n\n@staticmethod\n\ndef forward(ctx, x):\n\nreturn x.view_as(x)\n\n@staticmethod\n\ndef backward(ctx, grad_output):\n\nr…'], 'answer_start': [2060]} |
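The GradReverse backward above is truncated at r…. A hedged completion in the spirit of the DANN discussion (the negation body and the optional scale factor are assumptions):

import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Identity in the forward pass, sign-flipped (and scaled) gradients backward.
        return grad_output.neg() * ctx.lambd, None

x = torch.randn(3, requires_grad=True)
GradReverse.apply(x, 1.0).sum().backward()
print(x.grad)   # tensor([-1., -1., -1.])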
What is AdaptiveAvgPool2d? | The AdaptiveAvgPool2d layers confuse me a lot.
Is there any math formula explaining it? | 3 | 2018-10-10T05:58:53.699Z | Well, the specified output size is the output size, as <a href="https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d" rel="nofollow noopener">in the documentation</a>.
In more detail:
What happens is that the pooling stencil size (aka kernel size) is determined to be (input_size+target_size-1) // target_size, i.e. rounded up. With this Then the positions of where to apply the stencil … | 39 | 2018-10-10T06:34:39.250Z | https://discuss.pytorch.org/t/what-is-adaptiveavgpool2d/26897/2 | I think that should work.
Also, I just realized that Function should be defined in a different way in the newer versions of pytorch:
class GradReverse(Function):
@staticmethod
def forward(ctx, x):
return x.view_as(x)
@staticmethod
def backward(ctx, grad_output):
r… Well, the specified output size is the output size, as <a href="https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d" rel="nofollow noopener">in the documentation</a>.
In more detail:
What happens is that the pooling stencil size (aka kernel size) is determined to be (input_size+target_size-1) // target_size, i.e. rounded up. With this Then the positions of where to apply the stencil … In this context dim refers to the dimension in which the softmax function will be applied.
>>> a = Variable(torch.randn(5,2))
>>> F.softmax(a, dim=1)
Variable containing:
0.6360 0.3640
0.3541 0.6459
0.2412 0.7588
0.0860 0.9140
0.6258 0.3742
[torch.FloatTensor of size 5x2]
>>> F.softmax(a… | 1,311 | {'text': ['Well, the specified output size is the output size, as <a href="https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d" rel="nofollow noopener">in the documentation</a>.\n\nIn more detail:\n\nWhat happens is that the pooling stencil size (aka kernel size) is determined to be (input_size+target_size-1) // target_size, i.e. rounded up. With this Then the positions of where to apply the stencil …'], 'answer_start': [1311]} |
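A sketch showing why adaptive pooling is useful: the output spatial size is fixed no matter what the input size is, which is how classifiers accept variable-sized images:

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((4, 4))
print(pool(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 4, 4])
print(pool(torch.randn(1, 3, 50, 37)).shape)  # torch.Size([1, 3, 4, 4])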
Implicit dimension choice for softmax warning | Hey guys,
I was following the tutorial exactly
as given on the official <a href="http://PyTorch.org" rel="nofollow noopener">PyTorch.org</a> site.
However, I got stuck on the softmax function: the tutorial shows no warning, but my Python gives me a warning message that says,
UserWarning: Implicit dimension cho… | 4 | 2018-01-15T08:08:25.630Z | In this context dim refers to the dimension in which the softmax function will be applied.
>>> a = Variable(torch.randn(5,2))
>>> F.softmax(a, dim=1)
Variable containing:
0.6360 0.3640
0.3541 0.6459
0.2412 0.7588
0.0860 0.9140
0.6258 0.3742
[torch.FloatTensor of size 5x2]
>>> F.softmax(a… | 25 | 2018-02-27T17:55:10.286Z | https://discuss.pytorch.org/t/implicit-dimension-choice-for-softmax-warning/12314/8 | I think that should work.
Also, I just realized that Function should be defined in a different way in the newer versions of pytorch:
class GradReverse(Function):
@staticmethod
def forward(ctx, x):
return x.view_as(x)
@staticmethod
def backward(ctx, grad_output):
r… Well, the specified output size is the output size, as <a href="https://pytorch.org/docs/master/nn.html#torch.nn.AdaptiveAvgPool2d" rel="nofollow noopener">in the documentation</a>.
In more detail:
What happens is that the pooling stencil size (aka kernel size) is determined to be (input_size+target_size-1) // target_size, i.e. rounded up. With this Then the positions of where to apply the stencil … In this context dim refers to the dimension in which the softmax function will be applied.
>>> a = Variable(torch.randn(5,2))
>>> F.softmax(a, dim=1)
Variable containing:
0.6360 0.3640
0.3541 0.6459
0.2412 0.7588
0.0860 0.9140
0.6258 0.3742
[torch.FloatTensor of size 5x2]
>>> F.softmax(a… | 695 | {'text': ['In this context dim refers to the dimension in which the softmax function will be applied.\n\n>>> a = Variable(torch.randn(5,2))\n\n>>> F.softmax(a, dim=1)\n\nVariable containing:\n\n0.6360 0.3640\n\n0.3541 0.6459\n\n0.2412 0.7588\n\n0.0860 0.9140\n\n0.6258 0.3742\n\n[torch.FloatTensor of size 5x2]\n\n>>> F.softmax(a…'], 'answer_start': [695]} |
DataParallel imbalanced memory usage | Hi there,
I’m going to re-edit the whole thread to introduce an unlikely behavior with DataParallel
Right now there are several recent posts about this topic and I would like to summarize the problem.
[image]
<a href="https://discuss.pytorch.org/t/cuda-error-out-of-memory-huge-embedding-layer/22556">CUDA error: out of memory - huge embedding layer</a> <a class="badge-wrapper bullet" href="/c/nlp">nlp</a>
I am wor… | 7 | 2018-08-06T22:29:49.617Z | Hi there <a class="mention" href="/u/alband">@albanD</a>, <a class="mention" href="/u/yuzhou_song">@Yuzhou_Song</a>
I noticed there is a small mistake in the code you provided:
It’s necessary to unsqueeze the loss inside the forward pass so that DataParallel is able to build the loss back. The loss provided by PyTorch loss functions seems not to have dimensions, and DataParallel mounts the batch back … | 1 | 2018-08-09T02:22:37.944Z | https://discuss.pytorch.org/t/dataparallel-imbalanced-memory-usage/22551/12 | Hi there <a class="mention" href="/u/alband">@albanD</a>, <a class="mention" href="/u/yuzhou_song">@Yuzhou_Song</a>
I noticed there is a small mistake in the code you provided:
It’s necessary to unsqueeze the loss inside the forward pass so that DataParallel is able to build the loss back. The loss provided by PyTorch loss functions seems not to have dimensions, and DataParallel mounts the batch back … Here is an implementation that will work for any k1 and k2 and will reduce memory usage as much as possible.
If k2 is not huge and the one_step_module is relatively big, the slowdown of doing multiple backward should be negligible.
The code is not super clean and has been tested only against curre… clamp(min=0) is exactly ReLU. | 2,066 | {'text': ['Hi there <a class="mention" href="/u/alband">@albanD</a>, <a class="mention" href="/u/yuzhou_song">@Yuzhou_Song</a>\n\nI noticed there is a small mistake in the code you provided:\n\nIt’s necessary to unsqueeze the loss inside the forward pass so that DataParallel is able to build the loss back. The loss provided by PyTorch loss functions seems not to have dimensions, and DataParallel mounts the batch back …'], 'answer_start': [2066]} |
Implementing Truncated Backpropagation Through Time | Hello,
I’m implementing a recursive network that is going to be trained with very long sequences. I had memory problems when training because of that excessive length and I decided to use a truncated-BPTT algorithm to train it as described <a href="https://machinelearningmastery.com/gentle-introduction-backpropagation-time/" rel="nofollow noopener">here</a>, that is,
every k1 steps backpropagate taking k2 back… | 6 | 2018-03-26T14:49:40.267Z | Here is an implementation that will work for any k1 and k2 and will reduce memory usage as much as possible.
If k2 is not huge and the one_step_module is relatively big, the slowdown of doing multiple backward should be negligible.
The code is not super clean and has been tested only against curre… | 24 | 2018-03-27T10:04:25.128Z | https://discuss.pytorch.org/t/implementing-truncated-backpropagation-through-time/15500/4 | Hi there <a class="mention" href="/u/alband">@albanD</a>, <a class="mention" href="/u/yuzhou_song">@Yuzhou_Song</a>
I noticed there is a small mistake in the code you provided:
It’s necessary to unsqueeze the loss inside the forward pass so that DataParallel is able to build the loss back. The loss provided by PyTorch loss functions seems not to have dimensions, and DataParallel mounts the batch back … Here is an implementation that will work for any k1 and k2 and will reduce memory usage as much as possible.
If k2 is not huge and the one_step_module is relatively big, the slowdown of doing multiple backward should be negligible.
The code is not super clean and has been tested only against curre… clamp(min=0) is exactly ReLU. | 1,427 | {'text': ['Here is an implementation that will work for any k1 and k2 and will reduce memory usage as much as possible.\n\nIf k2 is not huge and the one_step_module is relatively big, the slowdown of doing multiple backward should be negligible.\n\nThe code is not super clean and has been tested only against curre…'], 'answer_start': [1427]} |
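The full k1/k2 implementation is truncated above; as a simpler reference, a compact sketch of truncated BPTT for the common case k1 == k2 (module sizes and the RNNCell choice are illustrative):

import torch
import torch.nn as nn

k = 5
rnn = nn.RNNCell(3, 8)
head = nn.Linear(8, 1)
opt = torch.optim.SGD(list(rnn.parameters()) + list(head.parameters()), lr=0.01)

h = torch.zeros(1, 8)
inputs, targets = torch.randn(20, 1, 3), torch.randn(20, 1, 1)
loss = 0.0
for t in range(inputs.size(0)):
    h = rnn(inputs[t], h)
    loss = loss + (head(h) - targets[t]).pow(2).mean()
    if (t + 1) % k == 0:
        opt.zero_grad()
        loss.backward()
        opt.step()
        h = h.detach()   # cut the graph so it never grows past k steps
        loss = 0.0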
Why does the .clamp function exist? | I was looking at the example:
<a href="http://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_autograd.html" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_autograd.html</a>
and has the line:
# Forward pass: compute predicted y using operations on Variables; these
# are exactly the same operations we used to compute the forward pass using
# Tensors, but w… | 4 | 2017-07-14T17:24:46.308Z | clamp(min=0) is exactly ReLU. | 20 | 2017-07-14T17:53:00.943Z | https://discuss.pytorch.org/t/why-does-the-clamp-function-exist/4902/2 | Hi there <a class="mention" href="/u/alband">@albanD</a>, <a class="mention" href="/u/yuzhou_song">@Yuzhou_Song</a>
I noticed there is an small mistake with the code you provided:
It’s necessary to unsqueeze loss inside forward pass to DataParallel were able to build loss back. Loss provided by PyTorch loss functions seems not to have dimensions, and DataParallel mount batch back … Here is an implementation that will work for any k1 and k2 and will reduce memory usage as much as possible.
If k2 is not huge and the one_step_module is relatively big, the slowdown of doing multiple backward should be negligible.
The code is not super clean and has been tested only against curre… clamp(min=0) is exactly ReLU. | 703 | {'text': ['clamp(min=0) is exactly ReLU.'], 'answer_start': [703]} |
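A one-line check of the equivalence stated above:

import torch

x = torch.randn(5)
print(torch.equal(x.clamp(min=0), torch.relu(x)))  # True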
RuntimeError: Expected object of scalar type Double but got scalar type Float for argument #2 'weight' | my code is
classes=["not a face","face"]
path = "F:/project/Database/sample1.jpg"
b=cv2.imread(path)
q=torch.from_numpy(b)
print(q.shape)
d=np.transpose(q.numpy(), (2, 0, 1))
print(d.shape)
print(type(d))
w=torch.from_numpy(d)
w = w.unsqueeze(0)
w= w.double()
print(type(w))
print(w.shape)
print(typ… | 3 | 2019-03-05T09:06:58.421Z | Can you run, before you enter the training loop:
net = net.float()
It will transform the model parameters to float.
And then in your training loop:
z = net(x.float())
That should proceed without error.
PS: replace .float() by .double() if you wish to have network + data in double precision for… | 32 | 2019-03-05T11:01:26.946Z | https://discuss.pytorch.org/t/runtimeerror-expected-object-of-scalar-type-double-but-got-scalar-type-float-for-argument-2-weight/38961/9 | Can you run, before you enter the training loop:
net = net.float()
It will transform the model parameters to float.
And then in your training loop:
z = net(x.float())
That should proceed without error.
PS: replace .float() by .double() if you wish to have network + data in double precision for… If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:
import torch.nn as nn
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape) The following code should work in PyTorch 0.2:
def cross_entropy(pred, soft_targets):
logsoftmax = nn.LogSoftmax()
return torch.mean(torch.sum(- soft_targets * logsoftmax(pred), 1))
assuming pred and soft_targets are both Variables with shape (batchsize, num_of_classes), each row of pred i… | 1,464 | {'text': ['Can you run, before you enter the training loop:\n\nnet = net.float()\n\nIt will transform the model parameters to float.\n\nAnd then in your training loop:\n\nz = net(x.float())\n\nThat should proceed without error.\n\nPS: replace .float() by .double() if you wish to have network + data in double precision for…'], 'answer_start': [1464]} |
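A self-contained sketch of the dtype mismatch and both fixes from the answer (float32 model weights vs. float64 inputs):

import torch
import torch.nn as nn

net = nn.Linear(4, 2)                          # weights are float32 by default
x64 = torch.randn(3, 4, dtype=torch.float64)   # e.g. data arriving from numpy

out = net(x64.float())        # fix 1: cast the input down to float32
# out = net.double()(x64)     # fix 2: convert the whole model to float64
print(out.dtype)              # torch.float32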
What is reshape layer in pytorch? | Hi all,
What is the reshape layer in pytorch?
In torch7 it seems to be nn.View, but what is it in pytorch?
What I want is to add a reshape layer in nn.Sequential.
Thanks. | 4 | 2017-03-16T09:52:11.189Z | If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:
import torch.nn as nn
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape) | 11 | 2017-09-07T06:56:27.163Z | https://discuss.pytorch.org/t/what-is-reshape-layer-in-pytorch/1110/8 | Can you run, before you enter the training loop:
net = net.float()
It will transform the model parameters to float.
And then in your training loop:
z = net(x.float())
That should proceed without error.
PS: replace .float() by .double() if you wish to have network + data in double precision for… If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:
import torch.nn as nn
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape) The following code should work in PyTorch 0.2:
def cross_entropy(pred, soft_targets):
logsoftmax = nn.LogSoftmax()
return torch.mean(torch.sum(- soft_targets * logsoftmax(pred), 1))
assuming pred and soft_targets are both Variables with shape (batchsize, num_of_classes), each row of pred i… | 1,041 | {'text': ['If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:\n\nimport torch.nn as nn\n\nclass Reshape(nn.Module):\n\ndef __init__(self, *args):\n\nsuper(Reshape, self).__init__()\n\nself.shape = args\n\ndef forward(self, x):\n\nreturn x.view(self.shape)'], 'answer_start': [1041]} |
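A usage sketch of the Reshape module from the answer above, placed inside nn.Sequential (the layer sizes are invented):
import torch
import torch.nn as nn

class Reshape(nn.Module):  # as defined in the answer above
    def __init__(self, *args):
        super(Reshape, self).__init__()
        self.shape = args

    def forward(self, x):
        return x.view(self.shape)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    Reshape(-1, 8 * 28 * 28),  # flatten the conv features
    nn.Linear(8 * 28 * 28, 10),
)
out = model(torch.randn(2, 1, 28, 28))
print(out.shape)  # torch.Size([2, 10])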
How should I implement cross-entropy loss with continuous target outputs? | The current version of cross-entropy loss only accepts one-hot vectors for target outputs.
I need to implement a version of cross-entropy loss that supports continuous target distributions. What I don’t know is how to implement a version of cross-entropy loss that is numerically stable.
For exampl… | 6 | 2017-12-04T01:46:35.162Z | The following code should work in PyTorch 0.2:
def cross_entropy(pred, soft_targets):
logsoftmax = nn.LogSoftmax()
return torch.mean(torch.sum(- soft_targets * logsoftmax(pred), 1))
assuming pred and soft_targets are both Variables with shape (batchsize, num_of_classes), each row of pred i… | 18 | 2018-01-19T06:08:40.841Z | https://discuss.pytorch.org/t/how-should-i-implement-cross-entropy-loss-with-continuous-target-outputs/10720/19 | Can you run, before you enter the training loop:
net = net.float()
It will transform the model parameters to float.
And then in your training loop:
z = net(x.float())
That should proceed without error.
PS: replace .float() by .double() if you wish to have network + data in double precision for… If you really want a reshape layer, maybe you can wrap it into a nn.Module like this:
import torch.nn as nn
class Reshape(nn.Module):
def __init__(self, *args):
super(Reshape, self).__init__()
self.shape = args
def forward(self, x):
return x.view(self.shape) The following code should work in PyTorch 0.2:
def cross_entropy(pred, soft_targets):
logsoftmax = nn.LogSoftmax()
return torch.mean(torch.sum(- soft_targets * logsoftmax(pred), 1))
assuming pred and soft_targets are both Variables with shape (batchsize, num_of_classes), each row of pred i… | 575 | {'text': ['The following code should work in PyTorch 0.2:\n\ndef cross_entropy(pred, soft_targets):\n\nlogsoftmax = nn.LogSoftmax()\n\nreturn torch.mean(torch.sum(- soft_targets * logsoftmax(pred), 1))\n\nassuming pred and soft_targets are both Variables with shape (batchsize, num_of_classes), each row of pred i…'], 'answer_start': [575]} |
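A quick usage sketch of that function; note that on recent PyTorch versions nn.LogSoftmax needs an explicit dim=1 (the tensors below are random placeholders):
import torch
import torch.nn as nn

def cross_entropy(pred, soft_targets):
    logsoftmax = nn.LogSoftmax(dim=1)  # dim is required on recent versions
    return torch.mean(torch.sum(-soft_targets * logsoftmax(pred), 1))

pred = torch.randn(8, 5, requires_grad=True)            # logits
soft_targets = torch.softmax(torch.randn(8, 5), dim=1)  # rows sum to 1
loss = cross_entropy(pred, soft_targets)
loss.backward()
print(loss.item())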
Run Pytorch on Multiple GPUs | Hello
Just a noobie question on running pytorch on multiple GPUs.
If I simply specify this:
device = torch.device("cuda:0"),
this only runs on the single GPU unit right?
If I have multiple GPUs, and I want to utilize ALL OF THEM. What should I do?
Will below’s command automatically utilize all … | 5 | 2018-07-09T20:36:39.165Z | If I understand correctly what you should do to run on multiple GPUs is for all GPUs
net = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
if you want to use a set of specific ones:
net = torch.nn.DataParallel(model, device_ids=[0,1,2,5,10,...])
Note: you actuall… | 1 | 2020-11-11T14:23:20.661Z | https://discuss.pytorch.org/t/run-pytorch-on-multiple-gpus/20932/62 | If I understand correctly what you should do to run on multiple GPUs is for all GPUs
net = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
if you want to use a set of specific ones:
net = torch.nn.DataParallel(model, device_ids=[0,1,2,5,10,...])
Note: you actuall… Note that Resize will behave differently on input images with a different height and width.
From the <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Resize">docs</a>:
size ( sequence or <a href="https://docs.python.org/3/library/functions.html#int"> int </a>) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to th… You can also apply class weighting using the weight argument for a lot of loss functions.
nn.NLLLoss or nn.CrossEntropyLoss both include this argument.
You can find all loss functions <a href="https://pytorch.org/docs/stable/nn.html#loss-functions">here</a>. | 1,754 | {'text': ['If I understand correctly what you should do to run on multiple GPUs is for all GPUs\n\nnet = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))\n\nif you want to use a set of specific ones:\n\nnet = torch.nn.DataParallel(model, device_ids=[0,1,2,5,10,...])\n\nNote: you actuall…'], 'answer_start': [1754]} |
RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [3, 224, 336] at entry 3 | I’m trying to implement a pretrained resnet50 on an image classification task with 42 labels and received this error. I don’t understand what caused the layer size to change. Below is my code; I stitched it together from different tutorials I could find.
import torch
import torch.nn as nn
import torch.nn.func… | 4 | 2020-06-28T16:28:07.292Z | Note that Resize will behave differently on input images with a different height and width.
From the <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Resize">docs</a>:
size ( sequence or <a href="https://docs.python.org/3/library/functions.html#int"> int </a>) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to th… | 28 | 2020-07-02T10:47:13.566Z | https://discuss.pytorch.org/t/runtimeerror-stack-expects-each-tensor-to-be-equal-size-but-got-3-224-224-at-entry-0-and-3-224-336-at-entry-3/87211/10 | If I understand correctly what you should do to run on multiple GPUs is for all GPUs
net = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
if you want to use a set of specific ones:
net = torch.nn.DataParallel(model, device_ids=[0,1,2,5,10,...])
Note: you actuall… Note that Resize will behave differently on input images with a different height and width.
From the <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Resize">docs</a>:
size ( sequence or <a href="https://docs.python.org/3/library/functions.html#int"> int </a>) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to th… You can also apply class weighting using the weight argument for a lot of loss functions.
nn.NLLLoss or nn.CrossEntropyLoss both include this argument.
You can find all loss functions <a href="https://pytorch.org/docs/stable/nn.html#loss-functions">here</a>. | 1,185 | {'text': ['Note that Resize will behave differently on input images with a different height and width.\n\nFrom the <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Resize">docs</a>:\n\nsize ( sequence or <a href="https://docs.python.org/3/library/functions.html#int"> int </a>) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to th…'], 'answer_start': [1185]} |
Dealing with imbalanced datasets in pytorch | I am trying to find a way to deal with imbalanced data in pytorch. I was used to Keras’ class_weight, although I am not sure what it really did (I think it was a matter of penalizing more or less certain classes).
The only solution that I find in pytorch is by using WeightedRandomSampler with DataLo… | 4 | 2018-08-07T13:37:47.850Z | You can also apply class weighting using the weight argument for a lot of loss functions.
nn.NLLLoss or nn.CrossEntropyLoss both include this argument.
You can find all loss functions <a href="https://pytorch.org/docs/stable/nn.html#loss-functions">here</a>. | 9 | 2018-08-07T13:46:34.724Z | https://discuss.pytorch.org/t/dealing-with-imbalanced-datasets-in-pytorch/22596/2 | If I understand correctly what you should do to run on multiple GPUs is for all GPUs
net = torch.nn.DataParallel(model, device_ids=list(range(torch.cuda.device_count())))
if you want to use a set of specific ones:
net = torch.nn.DataParallel(model, device_ids=[0,1,2,5,10,...])
Note: you actuall… Note that Resize will behave differently on input images with a different height and width.
From the <a href="https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.Resize">docs</a>:
size ( sequence or <a href="https://docs.python.org/3/library/functions.html#int"> int </a>) – Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to th… You can also apply class weighting using the weight argument for a lot of loss functions.
nn.NLLLoss or nn.CrossEntropyLoss both include this argument.
You can find all loss functions <a href="https://pytorch.org/docs/stable/nn.html#loss-functions">here</a>. | 787 | {'text': ['You can also apply class weighting using the weight argument for a lot of loss functions.\n\nnn.NLLLoss or nn.CrossEntropyLoss both include this argument.\n\nYou can find all loss functions <a href="https://pytorch.org/docs/stable/nn.html#loss-functions">here</a>.'], 'answer_start': [787]} |
Grayscale to RGB transform | Some of the images I have in the dataset are gray-scale, thus I need to convert them to RGB by replicating the gray-scale channel to each band. I am using a transforms.Lambda to do that, based on torch.cat. However, this does not seem to give the expected results
Example: Let xx be some image of size 28x28,… | 3 | 2018-05-18T14:53:16.103Z | While loading your images, you could use Image.open(path).convert('RGB') on all images.
If you are using ImageFolder, this functionality should be already there using the default loader.
Alternatively, you could repeat the values:
x = torch.randn(28, 28)
x.unsqueeze_(0)
x = x.repeat(3, 1, 1)
x.sh… | 26 | 2018-05-18T15:03:24.315Z | https://discuss.pytorch.org/t/grayscale-to-rgb-transform/18315/2 | While loading your images, you could use Image.open(path).convert('RGB') on all images.
If you are using ImageFolder, this functionality should be already there using the default loader.
Alternatively, you could repeat the values:
x = torch.randn(28, 28)
x.unsqueeze_(0)
x = x.repeat(3, 1, 1)
x.sh… Hi LMA,
In avg_pool2d, we define a kernel and stride size for the pooling operation, and the function just performs that operation on all valid inputs. For example, an avg_pool2d with kernel=3, stride=2 and padding=1, would reduce a 5x5 tensor to a 3x3 tensor, and a 7x7 tensor to a 4x4 tensor (HxW). … The in_channels in Pytorch’s nn.Conv2d correspond to the number of channels in your input.
Based on the input shape, it looks like you have 1 channel and a spatial size of 28x28.
Your first conv layer expects 28 input channels, which won’t work, so you should change it to 1.
Also the Dense layers… | 2,094 | {'text': ['While loading your images, you could use Image.open(path).convert('RGB') on all images.\n\nIf you are using ImageFolder, this functionality should be already there using the default loader.\n\nAlternatively, you could repeat the values:\n\nx = torch.randn(28, 28)\n\nx.unsqueeze_(0)\n\nx = x.repeat(3, 1, 1)\n\nx.sh…'], 'answer_start': [2094]} |
Adaptive_avg_pool2d vs avg_pool2d | What is the difference between adaptive_avg_pool2d and avg_pool2d under torch.nn.functional? What does adaptive mean? | 3 | 2018-10-11T02:03:33.259Z | Hi LMA,
In avg_pool2d, we define a kernel and stride size for the pooling operation, and the function just performs that operation on all valid inputs. For example, an avg_pool2d with kernel=3, stride=2 and padding=1, would reduce a 5x5 tensor to a 3x3 tensor, and a 7x7 tensor to a 4x4 tensor (HxW). … | 16 | 2018-10-11T07:42:53.659Z | https://discuss.pytorch.org/t/adaptive-avg-pool2d-vs-avg-pool2d/27011/2 | While loading your images, you could use Image.open(path).convert('RGB') on all images.
If you are using ImageFolder, this functionality should be already there using the default loader.
Alternatively, you could repeat the values:
x = torch.randn(28, 28)
x.unsqueeze_(0)
x = x.repeat(3, 1, 1)
x.sh… Hi LMA,
In avg_pool2d, we define a kernel and stride size for the pooling operation, and the function just performs that operation on all valid inputs. For example, an avg_pool2d with kernel=3, stride=2 and padding=1, would reduce a 5x5 tensor to a 3x3 tensor, and a 7x7 tensor to a 4x4 tensor (HxW). … The in_channels in Pytorch’s nn.Conv2d correspond to the number of channels in your input.
Based on the input shape, it looks like you have 1 channel and a spatial size of 28x28.
Your first conv layer expects 28 input channels, which won’t work, so you should change it to 1.
Also the Dense layers… | 1,367 | {'text': ['Hi LMA,\n\nIn avg_pool2d, we define a kernel and stride size for the pooling operation, and the function just performs that operation on all valid inputs. For example, an avg_pool2d with kernel=3, stride=2 and padding=1, would reduce a 5x5 tensor to a 3x3 tensor, and a 7x7 tensor to a 4x4 tensor (HxW). …'], 'answer_start': [1367]}
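A short sketch contrasting the two functions (input sizes chosen arbitrarily): adaptive_avg_pool2d fixes the output size and derives kernel and stride itself.
import torch
import torch.nn.functional as F

a = torch.randn(1, 3, 5, 5)
b = torch.randn(1, 3, 9, 9)
# avg_pool2d: the output size depends on the input size
print(F.avg_pool2d(a, kernel_size=3, stride=2).shape)  # torch.Size([1, 3, 2, 2])
print(F.avg_pool2d(b, kernel_size=3, stride=2).shape)  # torch.Size([1, 3, 4, 4])
# adaptive_avg_pool2d: the output size is fixed, whatever the input
print(F.adaptive_avg_pool2d(a, 4).shape)  # torch.Size([1, 3, 4, 4])
print(F.adaptive_avg_pool2d(b, 4).shape)  # torch.Size([1, 3, 4, 4])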
Pytorch equivalent of Keras | I’m trying to convert CNN model code from Keras to Pytorch.
here is the original keras model:
input_shape = (28, 28, 1)
model = Sequential()
model.add(Conv2D(28, kernel_size=(3,3), input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten()) # Flattening the 2D arrays f… | 4 | 2018-11-12T20:33:34.021Z | The in_channels in Pytorch’s nn.Conv2d correspond to the number of channels in your input.
Based on the input shape, it looks like you have 1 channel and a spatial size of 28x28.
Your first conv layer expects 28 input channels, which won’t work, so you should change it to 1.
Also the Dense layers… | 9 | 2018-11-12T20:51:40.097Z | https://discuss.pytorch.org/t/pytorch-equivalent-of-keras/29412/2 | While loading your images, you could use Image.open(path).convert('RGB') on all images.
If you are using ImageFolder, this functionality should be already there using the default loader.
Alternatively, you could repeat the values:
x = torch.randn(28, 28)
x.unsqueeze_(0)
x = x.repeat(3, 1, 1)
x.sh… Hi LMA,
In avg_pool2d, we define a kernel and stride size for the pooling operation, and the function just performs that operation on all valid inputs. For example, an avg_pool2d with kernel=3, stride=2 and padding=0, would reduce a 5x5 tensor to a 3x3 tensor, and a 7x7 tensor to a 4x4 tensor.(HxW) … The in_channels in Pytorch’s nn.Conv2d correspond to the number of channels in your input.
Based on the input shape, it looks like you have 1 channel and a spatial size of 28x28.
Your first conv layer expects 28 input channels, which won’t work, so you should change it to 1.
Also the Dense layers… | 630 | {'text': ['The in_channels in Pytorch’s nn.Conv2d correspond to the number of channels in your input.\n\nBased on the input shape, it looks like you have 1 channel and a spatial size of 28x28.\n\nYour first conv layer expects 28 input channels, which won’t work, so you should change it to 1.\n\nAlso the Dense layers…'], 'answer_start': [630]} |
Creating a mask tensor from an index tensor | I’m trying to create a mask based on an index tensor.
The mask size is [6, 1, 25]
The index size is [6, 1, 12]
First I have an index tensor indices:
print(indices)
tensor([[[ 0, 1, 2, 5, 6, 7, 12, 17, 18, 22, 23, 21]],
[[ 2, 3, 4, 7, 8, 9, 15, 16, 20, 21, 22, 13]],
[[… | 4 | 2018-12-08T20:18:06.162Z | I think you could use scatter_:
mask = torch.zeros(6, 1, 25)
mask.scatter_(2, indices, 1.) | 11 | 2018-12-08T20:23:33.972Z | https://discuss.pytorch.org/t/creating-a-mask-tensor-from-an-index-tensor/31648/2 | I think you could use scatter_:
mask = torch.zeros(6, 1, 25)
mask.scatter_(2, indices, 1.) Hi, consider that if you only pass the desired parameter into the optimizer but nothing else, you are only updating that parameter which is, indeed, equivalent to freezing that layer. However, you aren’t zeroing gradients for the other layers but accumulating them as they aren’t affected by optimizer.ze… Have a look at <a href="https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/45">this post</a> for a small example on multi label classification.
You could use multi-hot encoded targets, nn.BCE(WithLogits)Loss and an output layer returning [batch_size, nb_classes] (same as in multi-class classification). | 1,876 | {'text': ['I think you could use scatter_:\n\nmask = torch.zeros(6, 1, 25)\n\nmask.scatter_(2, indices, 1.)'], 'answer_start': [1876]} |
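A small verification of the scatter_ approach, shrunk to a toy index tensor for readability:
import torch

indices = torch.tensor([[[0, 2, 4]],
                        [[1, 3, 5]]])  # shape [2, 1, 3]
mask = torch.zeros(2, 1, 6)
mask.scatter_(2, indices, 1.)          # write 1. at the indexed positions along dim 2
print(mask)
# tensor([[[1., 0., 1., 0., 1., 0.]],
#         [[0., 1., 0., 1., 0., 1.]]])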
Best practice for freezing layers? | There are many posts asking how to freeze layer, but the different authors have a somewhat different approach. Most of the time I saw something like this:
Imagine we have a nn.Sequential and only want to train the last layer:
for parameter in model.parameters():
parameter.requires_grad = False… | 6 | 2019-10-14T08:25:50.727Z | Hi, consider that if you only pass the desired parameter into the optimizer but nothing else, you are only updating that parameter which is, indeed, equivalent to freezing that layer. However, you aren’t zeroing gradients for the other layers but accumulating them as they aren’t affected by optimizer.ze… | 6 | 2019-10-14T08:41:41.090Z | https://discuss.pytorch.org/t/best-practice-for-freezing-layers/58156/2 | I think you could use scatter_:
mask = torch.zeros(6, 1, 25)
mask.scatter_(2, indices, 1.) Hi, consider that if you only pass the desired parameter into the optimizer but nothing else, you are only updating that parameter which is, indeed, equivalent to freezing that layer. However, you aren’t zeroing gradients for the other layers but accumulating them as they aren’t affected by optimizer.ze… Have a look at <a href="https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/45">this post</a> for a small example on multi label classification.
You could use multi-hot encoded targets, nn.BCE(WithLogits)Loss and an output layer returning [batch_size, nb_classes] (same as in multi-class classification). | 1,031 | {'text': ['Hi, consider that if you only pass the desired parameter into the optimizer but nothing else, you are only pdating that parameter which is, indeed, equivalent to freeze that layer. However, you aren’t zeroing gradients for the other layers but accumulating them as they arent affected by optimizer.ze…'], 'answer_start': [1031]} |
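A minimal sketch of the recipe discussed above: requires_grad=False on the frozen part, and only the still-trainable parameters passed to the optimizer (the toy model is invented):
import torch.nn as nn
import torch.optim as optim

model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
for param in model[0].parameters():  # freeze the first layer
    param.requires_grad = False
# the optimizer only sees the last layer, so nothing else is updated
optimizer = optim.SGD(model[2].parameters(), lr=0.1)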
Is there an example for multi class multilabel classification in Pytorch? | Hello everyone.
How can I do multiclass multi label classification in Pytorch? Is there a tutorial or example somewhere that I can use?
I’d be grateful if anyone can help in this regard
Thank you all in advance | 3 | 2019-08-17T03:33:03.237Z | Have a look at <a href="https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/45">this post</a> for a small example on multi label classification.
You could use multi-hot encoded targets, nn.BCE(WithLogits)Loss and an output layer returning [batch_size, nb_classes] (same as in multi-class classification). | 10 | 2019-08-17T12:57:20.659Z | https://discuss.pytorch.org/t/is-there-an-example-for-multi-class-multilabel-classification-in-pytorch/53579/7 | I think you could use scatter_:
mask = torch.zeros(6, 1, 25)
mask.scatter_(2, indices, 1.) Hi, consider that if you only pass the desired parameter into the optimizer but nothing else, you are only pdating that parameter which is, indeed, equivalent to freeze that layer. However, you aren’t zeroing gradients for the other layers but accumulating them as they arent affected by optimizer.ze… Have a look at <a href="https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/45">this post</a> for a small example on multi label classification.
You could use multi-hot encoded targets, nn.BCE(WithLogits)Loss and an output layer returning [batch_size, nb_classes] (same as in multi-class classification). | 402 | {'text': ['Have a look at <a href="https://discuss.pytorch.org/t/multi-label-classification-in-pytorch/905/45">this post</a> for a small example on multi label classification.\n\nYou could use multi-hot encoded targets, nn.BCE(WithLogits)Loss and an output layer returning [batch_size, nb_classes] (same as in multi-class classification).'], 'answer_start': [402]} |
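A minimal multi-label sketch along those lines (batch size and class count invented):
import torch
import torch.nn as nn

batch_size, nb_classes = 4, 6
model = nn.Linear(20, nb_classes)  # output shape: [batch_size, nb_classes]
criterion = nn.BCEWithLogitsLoss()
x = torch.randn(batch_size, 20)
target = torch.randint(0, 2, (batch_size, nb_classes)).float()  # multi-hot targets
loss = criterion(model(x), target)
loss.backward()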
How to use my own sampler when I already use DistributedSampler? | I want to use my custom sampler (for example, I need oversampling and I want to use this repo: <a href="https://github.com/ufoym/imbalanced-dataset-sampler" rel="nofollow noopener">https://github.com/ufoym/imbalanced-dataset-sampler</a>), but I already use DistributedSampler for DataLoader, because I use multi-gpu training. How can I pass to DataLoader one more sampler or maybe I can do … | 6 | 2019-11-25T18:51:02.863Z | Just found DistributedSamplerWrapper from <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/data/sampler.py" rel="nofollow noopener">here</a>. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch! | 9 | 2020-04-20T13:08:15.991Z | https://discuss.pytorch.org/t/how-to-use-my-own-sampler-when-i-already-use-distributedsampler/62143/22 | Just found DistributedSamplerWrapper from <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/data/sampler.py" rel="nofollow noopener">here</a>. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch! You are trying to access an undefined key 'fc[0]' in print(activation['fc[0]']), while you are registering the hook with 'fc'.
Also, you are registering the hook after the forward pass, so you would have to rerun the forward pass to store the activation or register the hook before the first forward… Hi,
You can use the .clone() function directly on the Variable to create a copy. | 1,454 | {'text': ['Just found DistributedSamplerWrapper from <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/data/sampler.py" rel="nofollow noopener">here</a>. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch!'], 'answer_start': [1454]} |
How can I load my best model as a feature extractor/evaluator? | Hello,
I have stored my best model where the network is as follows
net
My_Net(
(cl1): Linear(in_features=25, out_features=6, bias=True)
(cl2): Linear(in_features=60, out_features=16, bias=True)
(fc1): Linear(in_features=16, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_fe… | 3 | 2018-04-30T19:12:24.402Z | You are trying to access an undefined key 'fc[0]' in print(activation['fc[0]']), while you are registering the hook with 'fc'.
Also, you are registering the hook after the forward pass, so you would have to rerun the forward pass to store the activation or register the hook before the first forward… | 0 | 2020-10-20T01:59:18.226Z | https://discuss.pytorch.org/t/how-can-l-load-my-best-model-as-a-feature-extractor-evaluator/17254/54 | Just found DistributedSamplerWrapper from <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/data/sampler.py" rel="nofollow noopener">here</a>. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch! You are trying to access an undefined key 'fc[0]' in print(activation['fc[0]']), while you are registering the hook with 'fc'.
Also, you are registering the hook after the forward pass, so you would have to rerun the forward pass to store the activation or register the hook before the first forward… Hi,
You can use the .clone() function directly on the Variable to create a copy. | 1,006 | {'text': ['You are trying to access an undefined key 'fc[0]' in print(activation['fc[0]']), while you are registering the hook with 'fc'.\n\nAlso, you are registering the hook after the forward pass, so you would have to rerun the forward pass to store the activation or register the hook before the first forward…'], 'answer_start': [1006]} |
How to copy a Variable in a network graph | If I need to copy a variable created by an operation instead of a user, and
let the copy have an independent memory, how can I do that?
Thank you! | 5 | 2017-04-02T15:35:40.029Z | Hi,
You can use the .clone() function directly on the Variable to create a copy. | 3 | 2017-04-02T16:07:24.199Z | https://discuss.pytorch.org/t/how-to-copy-a-variable-in-a-network-graph/1603/2 | Just found DistributedSamplerWrapper from <a href="https://github.com/catalyst-team/catalyst/blob/master/catalyst/data/sampler.py" rel="nofollow noopener">here</a>. It allows you to wrap DistributedSampler on the top of existing sampler. Might be good feature to add in PyTorch! You are trying to access an undefined key 'fc[0]' in print(activation['fc[0]']), while you are registering the hook with 'fc'.
Also, you are registering the hook after the forward pass, so you would have to rerun the forward pass to store the activation or register the hook before the first forward… Hi,
You can use the .clone() function directly on the Variable to create a copy. | 612 | {'text': ['Hi,\n\nYou can use the .clone() function directly on the Variable to create a copy.'], 'answer_start': [612]} |
Could someone explain batch_first=True in LSTM | I can’t figure out how it works. I try to change bs, time step, and input size with batch_first = True or no batch_first. It returns the dimensions that I feed to the model, so do I have to change it manually? | 3 | 2018-03-24T15:55:50.248Z | When you do
print(out[-1])
you are taking the last element of the batch dimension.
You probably wanted to do
print(out[:, -1]) | 3 | 2018-03-25T11:12:46.335Z | https://discuss.pytorch.org/t/could-someone-explain-batch-first-true-in-lstm/15402/7 | When you do
print(out[-1])
you are taking the last element of the batch dimension.
You probably wanted to do
print(out[:, -1]) You would not only change the loss scale, but also the gradients:
# setup
model = nn.Linear(10, 10)
x = torch.randn(10, 10)
y = torch.randn(10, 10)
# mean
criterion = nn.MSELoss(reduction='mean')
out = model(x)
loss = criterion(out, y)
loss.backward()
print(model.weight.grad.abs().sum())
> tensor(… I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha… | 1,386 | {'text': ['When you do\n\nprint(out[-1])\n\nyou are taking the last element of the batch dimension.\n\nYou probably wanted to do\n\nprint(out[:, -1])'], 'answer_start': [1386]} |
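A short sketch of that indexing with batch_first=True (all dimensions invented):
import torch
import torch.nn as nn

rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)  # (batch, seq_len, features)
out, _ = rnn(x)            # out: (batch, seq_len, hidden)
last = out[:, -1]          # last time step for every sample in the batch
print(last.shape)          # torch.Size([4, 16])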
Loss reduction sum vs mean: when to use each? | I’m rather new to pytorch (and NN architecture in general). While experimenting with my model I see that the various Loss classes for pytorch will accept a reduction parameter (none | sum | mean) for example. The differences are rather obvious regarding what will be returned, but I’m curious when … | 6 | 2021-03-23T04:06:00.843Z | You would not only change the loss scale, but also the gradients:
# setup
model = nn.Linear(10, 10)
x = torch.randn(10, 10)
y = torch.randn(10, 10)
# mean
criterion = nn.MSELoss(reduction='mean')
out = model(x)
loss = criterion(out, y)
loss.backward()
print(model.weight.grad.abs().sum())
> tensor(… | 5 | 2021-03-23T05:19:27.739Z | https://discuss.pytorch.org/t/loss-reduction-sum-vs-mean-when-to-use-each/115641/2 | When you do
print(out[-1])
you are taking the last element of the batch dimension.
You probably wanted to do
print(out[:, -1]) You would not only change the loss scale, but also the gradients:
# setup
model = nn.Linear(10, 10)
x = torch.randn(10, 10)
y = torch.randn(10, 10)
# mean
criterion = nn.MSELoss(reduction='mean')
out = model(x)
loss = criterion(out, y)
loss.backward()
print(model.weight.grad.abs().sum())
> tensor(… I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha… | 824 | {'text': ['You would not only change the loss scale, but also the gradients:\n\n# setup\n\nmodel = nn.Linear(10, 10)\n\nx = torch.randn(10, 10)\n\ny = torch.randn(10, 10)\n\n# mean\n\ncriterion = nn.MSELoss(reduction='mean')\n\nout = model(x)\n\nloss = criterion(out, y)\n\nloss.backward()\n\nprint(model.weight.grad.abs().sum())\n\n> tensor(…'], 'answer_start': [824]} |
Should we split batch_size according to ngpu_per_node when DistributedDataparallel | Assume we have two nodes: node-A and node-B, each with 4 GPUs (i.e. ngpu_per_node=4). We set args.batch_size = 256 on each node, meaning that we want each node to process 256 images in each forward pass.
(1) If we use DistributedDataparallel with 1gpu-per-process mode, shall we manually divide the batchsize by … | 6 | 2020-03-10T19:41:49.045Z | I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha… | 3 | 2020-03-11T21:27:30.069Z | https://discuss.pytorch.org/t/should-we-split-batch-size-according-to-ngpu-per-node-when-distributeddataparallel/72769/6 | When you do
print(out[-1])
you are taking the last element of the batch dimension.
You probably wanted to do
print(out[:, -1]) You would not only change the loss scale, but also the gradients:
# setup
model = nn.Linear(10, 10)
x = torch.randn(10, 10)
y = torch.randn(10, 10)
# mean
criterion = nn.MSELoss(reduction='mean')
out = model(x)
loss = criterion(out, y)
loss.backward()
print(model.weight.grad.abs().sum())
> tensor(… I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:
If the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha… | 460 | {'text': ['I agree with all your analysis on the magnitude of the gradients, and I agree that it depends on the loss function. But even with MSE loss fn, it can lead to different conclusions:\n\nIf the fw-bw has processed 8X data, we should set lr to 8X, meaning that the model should take a larger step if it ha…'], 'answer_start': [460]} |
Pytorch Coding Conventions | Hello everybody,
I am new to PyTorch, and I am looking for some PyTorch Coding Conventions or Best Practices. PyTorch is fantastic to allow you a lot of freedom, but it can sometimes be challenging to find something in someone else code when they have a completely different way of coding with PyTor… | 5 | 2019-04-14T15:36:44.944Z | <a class="mention" href="/u/lucasvandroux">@LucasVandroux</a>, thanks for referring to my unofficial style guide.
<a class="mention" href="/u/justusschock">@justusschock</a> and <a class="mention" href="/u/tom">@tom</a>, I added most of your recommendations to my style guide.
Feel free to add more if you feel like:
[image]
<a href="https://github.com/IgorSusmelj/pytorch-styleguide" target="_blank" rel="nofollow noopener">IgorSusmelj/pytorch-styleguide</a>
An inofficial styleguide and best practices summary for PyTor… | 3 | 2019-04-17T08:09:51.022Z | https://discuss.pytorch.org/t/pytorch-coding-conventions/42548/10 | <a class="mention" href="/u/lucasvandroux">@LucasVandroux</a>, thanks for referring to my unofficial style guide.
<a class="mention" href="/u/justusschock">@justusschock</a> and <a class="mention" href="/u/tom">@tom</a>, I added most of your recommendation to my style guide.
Feel free to add more if you feel like:
[image]
<a href="https://github.com/IgorSusmelj/pytorch-styleguide" target="_blank" rel="nofollow noopener">IgorSusmelj/pytorch-styleguide</a>
An inofficial styleguide and best practices summary for PyTor… I had this same issue where setting CUDA_VISIBLE_DEVICES=2 python train.py works but setting os.environ['CUDA_VISIBLE_DEVICES'] = "2" didn’t. The cause of the issue for me was importing the torch packages before setting os.environ['CUDA_VISIBLE_DEVICES'], moving it to the top of the file before imp… Thanks for the code.
It looks like you would like to swap the last linear layer of the pretrained ResNet with your nn.Sequential block.
However, resnet does not use self.classifier as its last layer, but self.fc. This also explains the error, since you are currently setting the required_grad flag … | 1,534 | {'text': ['<a class="mention" href="/u/lucasvandroux">@LucasVandroux</a>, thanks for referring to my unofficial style guide.\n\n<a class="mention" href="/u/justusschock">@justusschock</a> and <a class="mention" href="/u/tom">@tom</a>, I added most of your recommendation to my style guide.\n\nFeel free to add more if you feel like:\n\n[image]\n\n<a href="https://github.com/IgorSusmelj/pytorch-styleguide" target="_blank" rel="nofollow noopener">IgorSusmelj/pytorch-styleguide</a>\n\nAn inofficial styleguide and best practices summary for PyTor…'], 'answer_start': [1534]} |
CUDA_VISIBLE_DEVICE is of no use | I have a 4-Titan XP GPU server. When I use os.environ[“CUDA_VISIBLE_DEVICES”] =“0,1” to allocate GPUs for a task in python, I find that only GPU 0 is used. And there are out-of-memory problems even though GPU 1 is free.
Should I allocate memory to different GPUs myself? | 1 | 2017-11-16T04:37:15.097Z | I had this same issue where setting CUDA_VISIBLE_DEVICES=2 python train.py works but setting os.environ['CUDA_VISIBLE_DEVICES'] = "2" didn’t. The cause of the issue for me was importing the torch packages before setting os.environ['CUDA_VISIBLE_DEVICES'], moving it to the top of the file before imp… | 22 | 2019-06-26T15:25:40.198Z | https://discuss.pytorch.org/t/cuda-visible-device-is-of-no-use/10018/12 | <a class="mention" href="/u/lucasvandroux">@LucasVandroux</a>, thanks for referring to my unofficial style guide.
<a class="mention" href="/u/justusschock">@justusschock</a> and <a class="mention" href="/u/tom">@tom</a>, I added most of your recommendation to my style guide.
Feel free to add more if you feel like:
[image]
<a href="https://github.com/IgorSusmelj/pytorch-styleguide" target="_blank" rel="nofollow noopener">IgorSusmelj/pytorch-styleguide</a>
An inofficial styleguide and best practices summary for PyTor… I had this same issue where setting CUDA_VISIBLE_DEVICES=2 python train.py works but setting os.environ['CUDA_VISIBLE_DEVICES'] = "2" didn’t. The cause of the issue for me was importing the torch packages before setting os.environ['CUDA_VISIBLE_DEVICES'], moving it to the top of the file before imp… Thanks for the code.
It looks like you would like to swap the last linear layer of the pretrained ResNet with your nn.Sequential block.
However, resnet does not use self.classifier as its last layer, but self.fc. This also explains the error, since you are currently setting the required_grad flag … | 1,302 | {'text': ['I had this same issue where setting CUDA_VISIBLE_DEVICES=2 python train.py works but setting os.environ['CUDA_VISIBLE_DEVICES'] = "2" didn’t. The cause of the issue for me was importing the torch packages before setting os.environ['CUDA_VISIBLE_DEVICES'], moving it to the top of the file before imp…'], 'answer_start': [1302]} |
Element 0 of tensors does not require grad and does not have a grad_fn | Hi everybody,
I’ve been trying to debug what is happening but don’t know what’s wrong.
If you need more info let me know.
Regards!
epochs = 10
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
    for inputs, labels in train_loader:
        steps += 1
        inputs, labels = inputs.… | 3 | 2018-12-23T21:40:53.961Z | Thanks for the code.
It looks like you would like to swap the last linear layer of the pretrained ResNet with your nn.Sequential block.
However, resnet does not use self.classifier as its last layer, but self.fc. This also explains the error, since you are currently setting the requires_grad flag … | 14 | 2018-12-24T00:17:41.470Z | https://discuss.pytorch.org/t/element-0-of-tensors-does-not-require-grad-and-does-not-have-a-grad-fn/32908/4 | <a class="mention" href="/u/lucasvandroux">@LucasVandroux</a>, thanks for referring to my unofficial style guide.
<a class="mention" href="/u/justusschock">@justusschock</a> and <a class="mention" href="/u/tom">@tom</a>, I added most of your recommendations to my style guide.
Feel free to add more if you feel like:
[image]
<a href="https://github.com/IgorSusmelj/pytorch-styleguide" target="_blank" rel="nofollow noopener">IgorSusmelj/pytorch-styleguide</a>
An inofficial styleguide and best practices summary for PyTor… I had this same issue where setting CUDA_VISIBLE_DEVICES=2 python train.py works but setting os.environ['CUDA_VISIBLE_DEVICES'] = "2" didn’t. The cause of the issue for me was importing the torch packages before setting os.environ['CUDA_VISIBLE_DEVICES'], moving it to the top of the file before imp… Thanks for the code.
It looks like you would like to swap the last linear layer of the pretrained ResNet with your nn.Sequential block.
However, resnet does not use self.classifier as its last layer, but self.fc. This also explains the error, since you are currently setting the requires_grad flag … | 870 | {'text': ['Thanks for the code.\n\nIt looks like you would like to swap the last linear layer of the pretrained ResNet with your nn.Sequential block.\n\nHowever, resnet does not use self.classifier as its last layer, but self.fc. This also explains the error, since you are currently setting the requires_grad flag …'], 'answer_start': [870]}
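A sketch of the fix described above: freeze the backbone and replace model.fc (the head sizes and class count are invented; pretrained=True matches the torchvision API of that era):
import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False  # freeze the backbone
# ResNet's last layer is model.fc, not model.classifier
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),
    nn.ReLU(),
    nn.Linear(256, 10),          # invented number of classes
)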
Confused about "set_grad_enabled" | I am confused about the following <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noopener">snippet</a> taken from the tutorials about transfer learning.
for phase in ['train', 'val']:
    if phase == 'train':
        scheduler.step()
        model.train()  # Set model to training mode
    else:
        model.eval() … | 4 | 2019-02-27T15:35:43.700Z | model.train() and model.eval() change the behavior of some layers. E.g. nn.Dropout won’t drop anymore and nn.BatchNorm layers will use the running estimates instead of the batch statistics. The torch.set_grad_enabled line of code makes sure to clear the intermediate values for evaluation, which ar… | 10 | 2019-02-27T15:43:10.878Z | https://discuss.pytorch.org/t/confused-about-set-grad-enabled/38417/2 | model.train() and model.eval() change the behavior of some layers. E.g. nn.Dropout won’t drop anymore and nn.BatchNorm layers will use the running estimates instead of the batch statistics. The torch.set_grad_enabled line of code makes sure to clear the intermediate values for evaluation, which ar… You should add optimizer.step() into your training loop and move scheduler.step() into the epoch loop. That sounds right!
I’m not sure what S samples are in your example, but here is a small dummy code snippet showing what I mean:
batch_size = 10
nb_classes = 2
model = nn.Linear(10, nb_classes)
weight = torch.empty(nb_classes).uniform_(0, 1)
criterion = nn.CrossEntropyLoss(weight=weight, reducti… | 2,356 | {'text': ['model.train() and model.eval() change the behavior of some layers. E.g. nn.Dropout won’t drop anymore and nn.BatchNorm layers will use the running estimates instead of the batch statistics. The torch.set_grad_enabled line of code makes sure to clear the intermediate values for evaluation, which ar…'], 'answer_start': [2356]} |
How to use torch.optim.lr_scheduler.ExponentialLR? | I am trying to train an LSTM model on an NLP problem.
I want to use learning rate decay with the torch.optim.lr_scheduler.ExponentialLR class, yet I seem to fail to use it correctly.
My code:
optimizer = torch.optim.Adam(dual_encoder.parameters(), lr = 0.001)
scheduler = torch.optim.lr_scheduler.… | 2 | 2018-01-17T14:43:12.609Z | You should add optimizer.step() into your training loop and move scheduler.step() into the epoch loop. | 9 | 2018-01-17T15:07:25.928Z | https://discuss.pytorch.org/t/how-to-use-torch-optim-lr-scheduler-exponentiallr/12444/2 | model.train() and model.eval() change the behavior of some layers. E.g. nn.Dropout won’t drop anymore and nn.BatchNorm layers will use the running estimates instead of the batch statistics. The torch.set_grad_enabled line of code makes sure to clear the intermediate values for evaluation, which ar… You should add optimizer.step() into your training loop and move scheduler.step() into the epoch loop. That sounds right!
I’m not sure what S samples are in your example, but here is a small dummy code snippet showing what I mean:
batch_size = 10
nb_classes = 2
model = nn.Linear(10, nb_classes)
weight = torch.empty(nb_classes).uniform_(0, 1)
criterion = nn.CrossEntropyLoss(weight=weight, reducti… | 1,485 | {'text': ['You should add optimizer.step() into your training loop and move scheduler.step() into the epoch loop.'], 'answer_start': [1485]} |
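A runnable skeleton of the suggested ordering (model, data, and hyper-parameters are placeholders, not from the thread):
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.Adam(model.parameters(), lr=0.001)
scheduler = optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
criterion = nn.MSELoss()

for epoch in range(5):
    for _ in range(3):  # stand-in for iterating over a dataloader
        x, y = torch.randn(4, 10), torch.randn(4, 2)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()   # once per batch
    scheduler.step()       # once per epoch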
Per-class and per-sample weighting | How could one do both per-class weighting (probably CrossEntropyLoss) -and- per-sample weighting while training in pytorch?
The use case is classification of individual sections of time series data (think 1000s of sections per recording). The classes are very imbalanced, but given the continuous na… | 4 | 2018-09-19T23:38:07.220Z | That sounds right!
I’m not sure what S samples are in your example, but here is a small dummy code snippet showing what I mean:
batch_size = 10
nb_classes = 2
model = nn.Linear(10, nb_classes)
weight = torch.empty(nb_classes).uniform_(0, 1)
criterion = nn.CrossEntropyLoss(weight=weight, reducti… | 15 | 2018-09-20T00:20:47.451Z | https://discuss.pytorch.org/t/per-class-and-per-sample-weighting/25530/4 | model.train() and model.eval() change the behavior of some layers. E.g. nn.Dropout won’t drop anymore and nn.BatchNorm layers will use the running estimates instead of the batch statistics. The torch.set_grad_enabled line of code makes sure to clear the intermediate values for evaluation, which ar… You should add optimizer.step() into your training loop and move scheduler.step() into the epoch loop. That sounds right!
I’m not sure what S samples are in your example, but here is a small dummy code snippet showing what I mean:
batch_size = 10
nb_classes = 2
model = nn.Linear(10, nb_classes)
weight = torch.empty(nb_classes).uniform_(0, 1)
criterion = nn.CrossEntropyLoss(weight=weight, reducti… | 410 | {'text': ['That sounds right!\n\nI’m not sure, what S samples are in your example, but here is a small dummy code snippet showing, what I mean:\n\nbatch_size = 10\n\nnb_classes = 2\n\nmodel = nn.Linear(10, nb_classes)\n\nweight = torch.empty(nb_classes).uniform_(0, 1)\n\ncriterion = nn.CrossEntropyLoss(weight=weight, reducti…'], 'answer_start': [410]} |
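The snippet above is cut off at the reduction argument; assuming reduction='none' was intended, the usual completion of the per-class plus per-sample pattern looks like this (the sample weights are invented):
import torch
import torch.nn as nn

batch_size, nb_classes = 10, 2
model = nn.Linear(10, nb_classes)
weight = torch.empty(nb_classes).uniform_(0, 1)                  # per-class weights
criterion = nn.CrossEntropyLoss(weight=weight, reduction='none')

x = torch.randn(batch_size, 10)
target = torch.randint(0, nb_classes, (batch_size,))
sample_weight = torch.rand(batch_size)                           # one weight per sample
loss = (criterion(model(x), target) * sample_weight).mean()      # weight the unreduced loss, then reduce
loss.backward()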
Accessing intermediate layers of a pretrained network forward? | Hi, I want to get outputs from multiple layers of a pretrained VGG-19 network. I have already done that with this approach, which I found on this board:
class AlexNetConv4(nn.Module):
    def __init__(self):
        super(AlexNetConv4, self).__init__()
        self.features =… | 3 | 2018-01-10T14:28:18.360Z | Here is an example to get the output of a specified layer in vgg16
<a href="https://github.com/chenyuntc/pytorch-book/blob/master/chapter8-%E9%A3%8E%E6%A0%BC%E8%BF%81%E7%A7%BB(Neural%20Style)/PackedVGG.py" target="_blank" rel="nofollow noopener">chenyuntc/pytorch-book/blob/master/chapter8-风格迁移(Neural Style)/PackedVGG.py</a>
#coding:utf8
import torch
import torch.nn as nn
from torchvision.models import vgg16
from collections import namedtuple
class Vgg16(torch.nn.Module):
… | 13 | 2018-01-10T14:33:13.986Z | https://discuss.pytorch.org/t/accessing-intermediate-layers-of-a-pretrained-network-forward/12113/2 | Here is an example to get the output of a specified layer in vgg16
<a href="https://github.com/chenyuntc/pytorch-book/blob/master/chapter8-%E9%A3%8E%E6%A0%BC%E8%BF%81%E7%A7%BB(Neural%20Style)/PackedVGG.py" target="_blank" rel="nofollow noopener">chenyuntc/pytorch-book/blob/master/chapter8-风格迁移(Neural Style)/PackedVGG.py</a>
#coding:utf8
import torch
import torch.nn as nn
from torchvision.models import vgg16
from collections import namedtuple
class Vgg16(torch.nn.Module):
… module.training is the boolean you are looking for. :slight_smile: The output layer should have the number of classes as out_features.
Currently your output layer only returns one neuron, which corresponds to class0.
For a binary use case, this should work:
batch_size = 5
nb_classes = 2
in_features = 10
model = nn.Linear(in_features, nb_classes)
criterion = nn.… | 1,442 | {'text': ['Here is a example to get output of specified layer in vgg16\n\n<a href="https://github.com/chenyuntc/pytorch-book/blob/master/chapter8-%E9%A3%8E%E6%A0%BC%E8%BF%81%E7%A7%BB(Neural%20Style)/PackedVGG.py" target="_blank" rel="nofollow noopener">chenyuntc/pytorch-book/blob/master/chapter8-风格迁移(Neural Style)/PackedVGG.py</a>\n\n#coding:utf8\n\nimport torch\n\nimport torch.nn as nn\n\nfrom torchvision.models import vgg16\n\nfrom collections import namedtuple\n\nclass Vgg16(torch.nn.Module):\n\n…'], 'answer_start': [1442]} |
Check if model is eval or train | How can one check if a model is in train or eval state? | 20 | 2017-11-01T23:28:37.744Z | module.training is the boolean you are looking for. :slight_smile: | 64 | 2017-11-01T23:56:32.895Z | https://discuss.pytorch.org/t/check-if-model-is-eval-or-train/9395/2 | Here is an example to get the output of a specified layer in vgg16
<a href="https://github.com/chenyuntc/pytorch-book/blob/master/chapter8-%E9%A3%8E%E6%A0%BC%E8%BF%81%E7%A7%BB(Neural%20Style)/PackedVGG.py" target="_blank" rel="nofollow noopener">chenyuntc/pytorch-book/blob/master/chapter8-风格迁移(Neural Style)/PackedVGG.py</a>
#coding:utf8
import torch
import torch.nn as nn
from torchvision.models import vgg16
from collections import namedtuple
class Vgg16(torch.nn.Module):
… module.training is the boolean you are looking for. :slight_smile: The output layer should have the number of classes as out_features.
Currently your output layer only returns one neuron, which corresponds to class0.
For a binary use case, this should work:
batch_size = 5
nb_classes = 2
in_features = 10
model = nn.Linear(in_features, nb_classes)
criterion = nn.… | 1,207 | {'text': ['module.training is the boolean you are looking for. :slight_smile:'], 'answer_start': [1207]} |
RuntimeError: Expected object of scalar type Long but got scalar type Float when using CrossEntropyLoss | I have an NN that ends with the following linear layers
dense = nn.Linear(input_size, 1)
if I use CrossEntropyLoss as the loss function (as y is supposed to be the class number) I get the following error
RuntimeError Traceback (most recent call last)
<ipython-input-39-… | 0 | 2018-11-26T09:50:46.444Z | The output layer should have the number of classes as out_features.
Currently your output layer only returns one neuron, which corresponds to class0.
For a binary use case, this should work:
batch_size = 5
nb_classes = 2
in_features = 10
model = nn.Linear(in_features, nb_classes)
criterion = nn.… | 26 | 2018-11-26T12:55:15.769Z | https://discuss.pytorch.org/t/runtimeerror-expected-object-of-scalar-type-long-but-got-scalar-type-float-when-using-crossentropyloss/30542/2 | Here is a example to get output of specified layer in vgg16
<a href="https://github.com/chenyuntc/pytorch-book/blob/master/chapter8-%E9%A3%8E%E6%A0%BC%E8%BF%81%E7%A7%BB(Neural%20Style)/PackedVGG.py" target="_blank" rel="nofollow noopener">chenyuntc/pytorch-book/blob/master/chapter8-风格迁移(Neural Style)/PackedVGG.py</a>
#coding:utf8
import torch
import torch.nn as nn
from torchvision.models import vgg16
from collections import namedtuple
class Vgg16(torch.nn.Module):
… module.training is the boolean you are looking for. :slight_smile: The output layer should have the number of classes as out_features.
Currently your output layer only returns one neuron, which corresponds to class0.
For a binary use case, this should work:
batch_size = 5
nb_classes = 2
in_features = 10
model = nn.Linear(in_features, nb_classes)
criterion = nn.… | 553 | {'text': ['The output layer should have the number of classes as out_features.\n\nCurrently your output layer only returns one neuron, which corresponds to class0.\n\nFor a binary use case, this should work:\n\nbatch_size = 5\n\nnb_classes = 2\n\nin_features = 10\n\nmodel = nn.Linear(in_features, nb_classes)\n\ncriterion = nn.…'], 'answer_start': [553]} |
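A complete version of that pattern, as a sketch (values invented); the point is that the target must be a LongTensor of class indices, not floats:
import torch
import torch.nn as nn

batch_size, nb_classes, in_features = 5, 2, 10
model = nn.Linear(in_features, nb_classes)       # one output neuron per class
criterion = nn.CrossEntropyLoss()
x = torch.randn(batch_size, in_features)
y = torch.randint(0, nb_classes, (batch_size,))  # dtype torch.long, values in {0, 1}
loss = criterion(model(x), y)
loss.backward()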
PyTorch with CUDA 11 compatibility | Recently, I installed Ubuntu 20.04 on my system. Since it was a fresh install, I decided to upgrade all the software to the latest version, so I installed NVIDIA driver 450.51.05 and CUDA 11.0. To my surprise, Pytorch for CUDA 11 has not yet been rolled out.
My question is, should … | 1 | 2020-07-15T04:32:02.127Z | As explained <a href="https://discuss.pytorch.org/t/install-pytorch-with-cuda-11/89219/4">here</a>, the binaries are not built yet with CUDA11. However, the initial CUDA11 enablement PRs are already merged, so that you could install from source using CUDA11.
If you want to use the binaries, you would have to stick to 10.2 for now. | 1 | 2020-07-15T05:25:02.949Z | https://discuss.pytorch.org/t/pytorch-with-cuda-11-compatibility/89254/2 | As explained <a href="https://discuss.pytorch.org/t/install-pytorch-with-cuda-11/89219/4">here</a>, the binaries are not built yet with CUDA11. However, the initial CUDA11 enablement PRs are already merged, so that you could install from source using CUDA11.
If you want to use the binaries, you would have to stick to 10.2 for now. Thanks albanD~
I find a way that it won’t pop up the issue by rearranging the code like the following bellow:
value_loss = 0.5 * mse_loss(imaginated_values, lambda_target_values.detach())
value_optimizer.zero_grad()
action_loss = -1*(lambda_target_values.mean())
action_optimizer.zero_grad()
valu… Good to hear.
Maybe the new pos_weight argument for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss">nn.BCEwithLogitsLoss</a> might work better, since there is a difference in my weighting approach and the implemented one. See <a href="https://github.com/pytorch/pytorch/issues/5660">this discussion</a>. | 1,728 | {'text': ['As explained <a href="https://discuss.pytorch.org/t/install-pytorch-with-cuda-11/89219/4">here</a>, the binaries are not built yet with CUDA11. However, the initial CUDA11 enablement PRs are already merged, so that you could install from source using CUDA11.\n\nIf you want to use the binaries, you would have to stick to 10.2 for now.'], 'answer_start': [1728]} |
[Solved][Pytorch1.5] RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation | Hi,
I’m facing the issue when I want to do backward() with 2 models, action_model and value_model. I’ve already searched related topics. They said that ‘pytorch 1.5’ always automatically checks for ‘inplace’ operations when using backward(). However, it still reports the same problem. How can I do backward() w… | 2 | 2020-07-23T09:52:12.631Z | Thanks albanD~
I found a way that avoids the issue by rearranging the code as shown below:
value_loss = 0.5 * mse_loss(imaginated_values, lambda_target_values.detach())
value_optimizer.zero_grad()
action_loss = -1*(lambda_target_values.mean())
action_optimizer.zero_grad()
valu… | 16 | 2020-07-24T07:47:22.091Z | https://discuss.pytorch.org/t/solved-pytorch1-5-runtimeerror-one-of-the-variables-needed-for-gradient-computation-has-been-modified-by-an-inplace-operation/90256/4 | As explained <a href="https://discuss.pytorch.org/t/install-pytorch-with-cuda-11/89219/4">here</a>, the binaries are not built yet with CUDA11. However, the initial CUDA11 enablement PRs are already merged, so that you could install from source using CUDA11.
If you want to use the binaries, you would have to stick to 10.2 for now. Thanks albanD~
I found a way that avoids the issue by rearranging the code as shown below:
value_loss = 0.5 * mse_loss(imaginated_values, lambda_target_values.detach())
value_optimizer.zero_grad()
action_loss = -1*(lambda_target_values.mean())
action_optimizer.zero_grad()
valu… Good to hear.
Maybe the new pos_weight argument for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss">nn.BCEwithLogitsLoss</a> might work better, since there is a difference in my weighting approach and the implemented one. See <a href="https://github.com/pytorch/pytorch/issues/5660">this discussion</a>. | 1,198 | {'text': ['Thanks albanD~\n\nI find a way that it won’t pop up the issue by rearranging the code like the following bellow:\n\nvalue_loss = 0.5 * mse_loss(imaginated_values, lambda_target_values.detach())\n\nvalue_optimizer.zero_grad()\n\naction_loss = -1*(lambda_target_values.mean())\n\naction_optimizer.zero_grad()\n\nvalu…'], 'answer_start': [1198]} |
Unclear about Weighted BCE Loss | Hey there super people!
I am having issues understanding the BCELoss weight parameter. I am working on a binary classification problem: I have an RNN which, for each time step over a sequence, produces a binary classification. Precisely, it produces an output of size (batch, sequence_len) where each elemen…
Maybe the new pos_weight argument for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss">nn.BCEwithLogitsLoss</a> might work better, since there is a difference in my weighting approach and the implemented one. See <a href="https://github.com/pytorch/pytorch/issues/5660">this discussion</a>. | 2 | 2018-07-30T06:30:51.193Z | https://discuss.pytorch.org/t/unclear-about-weighted-bce-loss/21486/4 | As explained <a href="https://discuss.pytorch.org/t/install-pytorch-with-cuda-11/89219/4">here</a>, the binaries are not built yet with CUDA11. However, the initial CUDA11 enablement PRs are already merged, so that you could install from source using CUDA11.
If you want to use the binaries, you would have to stick to 10.2 for now. Thanks albanD~
I find a way that it won’t pop up the issue by rearranging the code like the following bellow:
value_loss = 0.5 * mse_loss(imaginated_values, lambda_target_values.detach())
value_optimizer.zero_grad()
action_loss = -1*(lambda_target_values.mean())
action_optimizer.zero_grad()
valu… Good to hear.
Maybe the new pos_weight argument for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss">nn.BCEwithLogitsLoss</a> might work better, since there is a difference in my weighting approach and the implemented one. See <a href="https://github.com/pytorch/pytorch/issues/5660">this discussion</a>. | 645 | {'text': ['Good to hear.\n\nMaybe the new pos_weight argument for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss">nn.BCEwithLogitsLoss</a> might work better, since there is a difference in my weighting approach and the implemented one. See <a href="https://github.com/pytorch/pytorch/issues/5660">this discussion</a>.'], 'answer_start': [645]} |
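A minimal sketch of the pos_weight argument (the value 3.0 is arbitrary; it scales the loss contribution of positive targets):
import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3.0]))
logits = torch.randn(4, 1, requires_grad=True)  # raw model outputs
target = torch.randint(0, 2, (4, 1)).float()
loss = criterion(logits, target)
loss.backward()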
DataLoader, when num_worker >0, there is bug | import h5py
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
class H5Dataset(Dataset):
    def __init__(self, h5_path):
        self.h5_path = h5_path
        self.h5_file = h5py.File(h5_path, 'r')
        self.length = len(h5py.File(h5_path, 'r'))
def __ge… | 2 | 2018-09-21T06:29:58.470Z | So I investigated it further and indeed opening HDF5 introduces huge overhead. I’ve tested it on this code: <a href="https://github.com/piojanu/World-Models" rel="nofollow noopener">https://github.com/piojanu/World-Models</a> (my implementation of the World Models (further WM) paper, the memory training is written in PyTorch). Note: the code I link here doesn’t have multipro… | 38 | 2019-02-05T11:42:04.974Z | https://discuss.pytorch.org/t/dataloader-when-num-worker-0-there-is-bug/25643/16 | So I investigated it further and indeed opening HDF5 introduces huge overhead. I’ve tested it on this code: <a href="https://github.com/piojanu/World-Models" rel="nofollow noopener">https://github.com/piojanu/World-Models</a> (my implementation of the World Models (further WM) paper, the memory training is written in PyTorch). Note: the code I link here doesn’t have multipro… It is just a look up table from indices to vectors. You can manually initialize them however you want, e.g. to word2vec weights. Additionally to what <a class="mention" href="/u/royboy">@royboy</a> said, you need to push your criterion to the GPU, if it’s stateful, i.e. if it has some parameters or internal states.
Usually loss functions are just functional so that it is not necessary. | 1,956 | {'text': ['So I investigated it further and in deed opening HDF5 introduces huge overhead. I’ve tested it on this code: <a href="https://github.com/piojanu/World-Models" rel="nofollow noopener">https://github.com/piojanu/World-Models</a> (my implementation of the World Models (further WM) paper, the memory training is written in PyTorch). Note: the code I link here doesn’t have multipro…'], 'answer_start': [1956]} |
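A common fix for the HDF5-plus-workers problem discussed above is to open the file lazily inside __getitem__, so every DataLoader worker gets its own handle. A sketch (the dataset key 'data' is an assumption about the file layout):

import h5py
import torch
from torch.utils.data import Dataset

class LazyH5Dataset(Dataset):
    def __init__(self, h5_path, key='data'):
        self.h5_path, self.key = h5_path, key
        self.h5_file = None                       # opened on first access, per process
        with h5py.File(h5_path, 'r') as f:        # open briefly only to read the length
            self.length = len(f[key])

    def __getitem__(self, index):
        if self.h5_file is None:                  # first call inside this worker
            self.h5_file = h5py.File(self.h5_path, 'r')
        return torch.from_numpy(self.h5_file[self.key][index])

    def __len__(self):
        return self.length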
What is nn.embedding exactly doing? | I was wondering what kind of embedding is used in the embedding function provided by pytorch. It’s not clear what is actually happening. For example, is a pre-trained embedding being used to project the word tokens to its hypothetical space? Is there a distance measure being used? Or is it embedding … | 4 | 2018-01-19T01:53:58.776Z | It is just a look up table from indices to vectors. You can manually initialize them however you want, e.g. to word2vec weights. | 5 | 2018-01-19T02:07:21.512Z | https://discuss.pytorch.org/t/what-is-nn-embedding-exactly-doing/12521/2 | So I investigated it further and indeed opening HDF5 introduces huge overhead. I’ve tested it on this code: <a href="https://github.com/piojanu/World-Models" rel="nofollow noopener">https://github.com/piojanu/World-Models</a> (my implementation of the World Models (further WM) paper, the memory training is written in PyTorch). Note: the code I link here doesn’t have multipro… It is just a look up table from indices to vectors. You can manually initialize them however you want, e.g. to word2vec weights. Additionally to what <a class="mention" href="/u/royboy">@royboy</a> said, you need to push your criterion to the GPU, if it’s stateful, i.e. if it has some parameters or internal states.
Usually loss functions are just functional so that it is not necessary. | 1,365 | {'text': ['It is just a look up table from indices to vectors. You can manually initialize them however you want, e.g. to work2vec weights.'], 'answer_start': [1365]} |
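To make the "lookup table" point concrete, a small sketch; the random matrix stands in for real word2vec weights:

import torch
import torch.nn as nn

pretrained = torch.randn(10, 3)                  # stand-in for a (vocab_size, dim) word2vec matrix
emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
tokens = torch.tensor([1, 4, 4, 9])
vectors = emb(tokens)                            # plain row lookup: shape (4, 3)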
Move the loss function to GPU | Hi, everyone,
I have a question about the “.cuda()”. In an example of PyTorch, I saw code like this:
criterion = nn.CrossEntropyLoss().cuda()
In my code, I don’t do this. So I am wondering if it is necessary to move the loss function to the GPU.
Thanks | 3 | 2018-06-21T14:36:39.506Z | Additionally to what <a class="mention" href="/u/royboy">@royboy</a> said, you need to push your criterion to the GPU, if it’s stateful, i.e. if it has some parameters or internal states.
Usually loss functions are just functional so that it is not necessary. | 10 | 2018-06-21T20:48:01.660Z | https://discuss.pytorch.org/t/move-the-loss-function-to-gpu/20060/3 | So I investigated it further and indeed opening HDF5 introduces huge overhead. I’ve tested it on this code: <a href="https://github.com/piojanu/World-Models" rel="nofollow noopener">https://github.com/piojanu/World-Models</a> (my implementation of the World Models (further WM) paper, the memory training is written in PyTorch). Note: the code I link here doesn’t have multipro… It is just a look up table from indices to vectors. You can manually initialize them however you want, e.g. to word2vec weights. Additionally to what <a class="mention" href="/u/royboy">@royboy</a> said, you need to push your criterion to the GPU, if it’s stateful, i.e. if it has some parameters or internal states.
Usually loss functions are just functional so that it is not necessary. | 516 | {'text': ['Additionally to what <a class="mention" href="/u/royboy">@royboy</a> said, you need to push your criterion to the GPU, if it’s stateful, i.e. if it has some parameters or internal states.\n\nUsually loss functions are just functional so that it is not necessary.'], 'answer_start': [516]} |
Dropout at test time in densenet | I have fine-tuned the pre-trained densenet121 pytorch model with dropout rate of 0.2.
Now, is there any way I can use dropout while testing an individual image?
The purpose is to pass a single image multiple times through the learned network (with dropout) and calculate mean/variance on the output… | 3 | 2017-08-25T20:43:05.041Z | you can set your whole network to .eval() mode, but then set your dropout layers to .train() mode.
You can use the apply function to achieve this, for example:
<a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" class="onebox" target="_blank">http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply</a> | 8 | 2017-08-28T01:25:18.335Z | https://discuss.pytorch.org/t/dropout-at-test-time-in-densenet/6738/2 | you can set your whole network to .eval() mode, but then set your dropout layers to .train() mode.
You can use the apply function to achieve this, for example:
<a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" class="onebox" target="_blank">http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply</a> You can deepcopy a model:
model = nn.Linear(1, 1)
model_copy = copy.deepcopy(model)
with torch.no_grad():
model.weight.fill_(10.)
print(model.weight)
> Parameter containing:
tensor([[10.]], requires_grad=True)
print(model_copy.weight)
> Parameter containing:
tensor([[-0.5596]], requires_grad=… Actually, with ONNX-Caffe2 package, you can easily turn an ONNX model to a Caffe2 model, then dump it into pb files.
Here is an example:
import onnx
from onnx_caffe2.backend import Caffe2Backend
onnx_proto_file = "/onnx.proto"
torch.onnx.export(G, x, onnx_proto_file, verbose=True)
onnx_model = on… | 1,552 | {'text': ['you can set your whole network to .eval() mode, but then set your dropout layers to .train() mode.\n\nYou can use the apply function to achieve this for example:\n\n<a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" class="onebox" target="_blank">http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply</a>'], 'answer_start': [1552]} |
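A runnable sketch of the eval-plus-apply recipe above for Monte Carlo dropout at test time (the tiny network and the 20 passes are arbitrary choices):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(0.2), nn.Linear(10, 2))

def enable_dropout(m):
    if isinstance(m, nn.Dropout):
        m.train()                                # keep dropout sampling active

model.eval()                                     # everything else stays in eval mode
model.apply(enable_dropout)
x = torch.randn(1, 10)
samples = torch.stack([model(x) for _ in range(20)])
mean, var = samples.mean(0), samples.var(0)      # predictive mean/variance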
Can I deepcopy a model? | There is some chatter online that I can’t deepcopy a model… Is this right?
Additionally, is there a way after loading a model to move it between cpu and gpu? | 3 | 2019-07-31T12:40:45.979Z | You can deepcopy a model:
model = nn.Linear(1, 1)
model_copy = copy.deepcopy(model)
with torch.no_grad():
model.weight.fill_(10.)
print(model.weight)
> Parameter containing:
tensor([[10.]], requires_grad=True)
print(model_copy.weight)
> Parameter containing:
tensor([[-0.5596]], requires_grad=… | 10 | 2019-07-31T12:58:00.454Z | https://discuss.pytorch.org/t/can-i-deepcopy-a-model/52192/2 | you can set your whole network to .eval() mode, but then set your dropout layers to .train() mode.
You can use the apply function to achieve this, for example:
<a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" class="onebox" target="_blank">http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply</a> You can deepcopy a model:
model = nn.Linear(1, 1)
model_copy = copy.deepcopy(model)
with torch.no_grad():
model.weight.fill_(10.)
print(model.weight)
> Parameter containing:
tensor([[10.]], requires_grad=True)
print(model_copy.weight)
> Parameter containing:
tensor([[-0.5596]], requires_grad=… Actually, with ONNX-Caffe2 package, you can easily turn an ONNX model to a Caffe2 model, then dump it into pb files.
Here is an example:
import onnx
from onnx_caffe2.backend import Caffe2Backend
onnx_proto_file = "/onnx.proto"
torch.onnx.export(G, x, onnx_proto_file, verbose=True)
onnx_model = on… | 1,104 | {'text': ['You can deepcopy a model:\n\nmodel = nn.Linear(1, 1)\n\nmodel_copy = copy.deepcopy(model)\n\nwith torch.no_grad():\n\nmodel.weight.fill_(1.)\n\nprint(model.weight)\n\n> Parameter containing:\n\ntensor([[10.]], requires_grad=True)\n\nprint(model_copy.weight)\n\n> Parameter containing:\n\ntensor([[-0.5596]], requires_grad=…'], 'answer_start': [1104]} |
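A compact sketch combining both parts of the question above, deepcopying a model and then moving it between devices:

import copy
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
snapshot = copy.deepcopy(model)                  # independent copy of parameters and buffers
if torch.cuda.is_available():
    model.to('cuda')                             # move after loading/copying
model.to('cpu')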
ONNX: deploying a trained model in a C++ project | I expect that most people are using ONNX to transfer trained models from Pytorch to Caffe2 because they want to deploy their model as part of a C/C++ project. However, there are no examples which show how to do this from beginning to end.
From the Pytorch documentation <a href="http://pytorch.org/docs/master/onnx.html" rel="nofollow noopener">here</a>, I understand how to co… | 6 | 2017-11-07T05:25:21.060Z | Actually, with ONNX-Caffe2 package, you can easily turn an ONNX model to a Caffe2 model, then dump it into pb files.
Here is an example:
import onnx
from onnx_caffe2.backend import Caffe2Backend
onnx_proto_file = "/onnx.proto"
torch.onnx.export(G, x, onnx_proto_file, verbose=True)
onnx_model = on… | 3 | 2017-11-09T19:38:56.288Z | https://discuss.pytorch.org/t/onnx-deploying-a-trained-model-in-a-c-project/9593/3 | you can set your whole network to .eval() mode, but then set your dropout layers to .train() mode.
You can use the apply function to achieve this, for example:
<a href="http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply" class="onebox" target="_blank">http://pytorch.org/docs/master/nn.html#torch.nn.Module.apply</a> You can deepcopy a model:
model = nn.Linear(1, 1)
model_copy = copy.deepcopy(model)
with torch.no_grad():
model.weight.fill_(10.)
print(model.weight)
> Parameter containing:
tensor([[10.]], requires_grad=True)
print(model_copy.weight)
> Parameter containing:
tensor([[-0.5596]], requires_grad=… Actually, with ONNX-Caffe2 package, you can easily turn an ONNX model to a Caffe2 model, then dump it into pb files.
Here is an example:
import onnx
from onnx_caffe2.backend import Caffe2Backend
onnx_proto_file = "/onnx.proto"
torch.onnx.export(G, x, onnx_proto_file, verbose=True)
onnx_model = on… | 645 | {'text': ['Actually, with ONNX-Caffe2 package, you can easily turn an ONNX model to a Caffe2 model, then dump it into pb files.\n\nHere is an example:\n\nimport onnx\n\nfrom onnx_caffe2.backend import Caffe2Backend\n\nonnx_proto_file = "/onnx.proto"\n\ntorch.onnx.export(G, x, onnx_proto_file, verbose=True)\n\nonnx_model = on…'], 'answer_start': [645]} |
Why is pytorch’s GPU utilization so low in production ( NOT training )? | utilization (which you can check using nvidia-smi) – defined in this <a href="https://stackoverflow.com/questions/40937894/nvidia-smi-volatile-gpu-utilization-explanation/40938696#40938696" rel="nofollow noopener">link</a> – is not how well a process is using the GPU resources. Please read the definition if you aren’t sure.
Why is GPU utilization so low for code written in PyTorch (averages around 30%)? Does pytorch create unnecessary work f… | 4 | 2019-02-27T05:09:41.900Z | In this <a href="https://github.com/NVIDIA/tacotron2/issues/183" rel="nofollow noopener">issue</a>, a dev from nvidia explains why this problem is occurring. Essentially, the answer is: pytorch is not optimized well plus the nature of Tacotron2’s network architecture produced this low nvidia-smi utilization. It is not a bug. | 2 | 2019-04-16T06:49:30.345Z | https://discuss.pytorch.org/t/why-is-pytorchs-gpu-utilization-so-low-in-production-not-training/38366/35 | In this <a href="https://github.com/NVIDIA/tacotron2/issues/183" rel="nofollow noopener">issue</a>, a dev from nvidia explains why this problem is occurring. Essentially, the answer is: pytorch is not optimized well plus the nature of Tacotron2’s network architecture produced this low nvidia-smi utilization. It is not a bug. Would you like to lower the learning rate to its minimum in each epoch and then restart from the base learning rate?
If so, you could try the following code:
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.)
steps = 10
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimi… You would need to set requires_grad=True for the weights and it would also work as nn.Conv2d internally just calls the functional API, see <a href="https://github.com/pytorch/pytorch/blob/1f94ce1f97d95e0d20540accea3671ee8ff2dec3/torch/nn/modules/conv.py#L311">here</a>. :wink:
However, if you prefer to use the module, you could try the following code:
weights = ...
conv = nn.Conv2d(nb_channels, 1, 3, bias=False)
with tor… | 1,932 | {'text': ['In this <a href="https://github.com/NVIDIA/tacotron2/issues/183" rel="nofollow noopener">issue</a>, a dev from nvidia explains why this problem is occuring. Essentially, the asnwer is: pytorch is not optimized well plus the nature of Tacotron2’s network architecture produced this low nvidia-smi utilization. It is not a bug.'], 'answer_start': [1932]} |
How to implement torch.optim.lr_scheduler.CosineAnnealingLR? | Hi,
I am trying to implement SGDR in my training but I am not sure how to implement it in PyTorch.
I want the learning rate to reset every epoch.
Here is my code:
model = ConvolutionalAutoEncoder().to(device)
# model = nn.DataParallel(model)
# Loss and optimizer
learning_rate = 0.1
weight_decay … | 1 | 2018-11-05T09:02:20.857Z | Would you like to lower the learning rate to its minimum in each epoch and then restart from the base learning rate?
If so, you could try the following code:
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.)
steps = 10
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimi… | 31 | 2018-11-05T12:27:55.203Z | https://discuss.pytorch.org/t/how-to-implement-torch-optim-lr-scheduler-cosineannealinglr/28797/6 | In this <a href="https://github.com/NVIDIA/tacotron2/issues/183" rel="nofollow noopener">issue</a>, a dev from nvidia explains why this problem is occurring. Essentially, the answer is: pytorch is not optimized well plus the nature of Tacotron2’s network architecture produced this low nvidia-smi utilization. It is not a bug. Would you like to lower the learning rate to its minimum in each epoch and then restart from the base learning rate?
If so, you could try the following code:
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.)
steps = 10
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimi… You would need to set requires_grad=True for the weights and it would also work as nn.Conv2d internally just calls the functional API, see <a href="https://github.com/pytorch/pytorch/blob/1f94ce1f97d95e0d20540accea3671ee8ff2dec3/torch/nn/modules/conv.py#L311">here</a>. :wink:
However, if you prefer to use the module, you could try the following code:
weights = ...
conv = nn.Conv2d(nb_channels, 1, 3, bias=False)
with tor… | 1,292 | {'text': ['Would you like to lower the learning rate to its minimum in each epoch and then restart from the base learning rate?\n\nIf so, you could try the following code:\n\nmodel = nn.Linear(10, 2)\n\noptimizer = optim.SGD(model.parameters(), lr=1.)\n\nsteps = 10\n\nscheduler = optim.lr_scheduler.CosineAnnealingLR(optimi…'], 'answer_start': [1292]} |
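Newer PyTorch versions also ship a scheduler that restarts automatically; a sketch under that assumption (T_0 counts the steps per restart, and the loop sizes are arbitrary):

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.0)
scheduler = optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=10)
for epoch in range(3):
    for step in range(10):
        optimizer.step()                         # training step would go here
        scheduler.step()                         # lr jumps back to the base lr every 10 steps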
Setting custom kernel for CNN in pytorch | Is there a way to specify our own custom kernel values for a convolutional neural network in pytorch? Something like <a href="https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/layers/conv2d_transpose" rel="nofollow noopener">kernel_initialiser</a> in tensorflow? E.g. I want a 3x3 kernel in nn.Conv2d with initialization so that it acts as an identity kernel -
0 0 0
0 1 0
0 0 0
(this will effectively return the… | 3 | 2018-10-13T11:37:39.014Z | You would need to set requires_grad=True for the weights and it would also work as nn.Conv2d internally just calls the functional API, see <a href="https://github.com/pytorch/pytorch/blob/1f94ce1f97d95e0d20540accea3671ee8ff2dec3/torch/nn/modules/conv.py#L311">here</a>. :wink:
However, if you prefer to use the module, you could try the following code:
weights = ...
conv = nn.Conv2d(nb_channels, 1, 3, bias=False)
with tor… | 8 | 2018-10-13T12:07:34.882Z | https://discuss.pytorch.org/t/setting-custom-kernel-for-cnn-in-pytorch/27176/4 | In this <a href="https://github.com/NVIDIA/tacotron2/issues/183" rel="nofollow noopener">issue</a>, a dev from nvidia explains why this problem is occuring. Essentially, the asnwer is: pytorch is not optimized well plus the nature of Tacotron2’s network architecture produced this low nvidia-smi utilization. It is not a bug. Would you like to lower the learning rate to its minimum in each epoch and then restart from the base learning rate?
If so, you could try the following code:
model = nn.Linear(10, 2)
optimizer = optim.SGD(model.parameters(), lr=1.)
steps = 10
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimi… You would need to set requires_grad=True for the weights and it would also work as nn.Conv2d internally just calls the functional API, see <a href="https://github.com/pytorch/pytorch/blob/1f94ce1f97d95e0d20540accea3671ee8ff2dec3/torch/nn/modules/conv.py#L311">here</a>. :wink:
However, if you prefer to use the module, you could try the following code:
weights = ...
conv = nn.Conv2d(nb_channels, 1, 3, bias=False)
with tor… | 638 | {'text': ['You would need to set requires_grad=True for the weights and it would also work as nn.Conv2d internally just calls the functional API, see <a href="https://github.com/pytorch/pytorch/blob/1f94ce1f97d95e0d20540accea3671ee8ff2dec3/torch/nn/modules/conv.py#L311">here</a>. :wink:\n\nHowever, if you prefer to use the module, you could try the following code:\n\nweights = ...\n\nconv = nn.Conv2d(nb_channels, 1, 3, bias=False)\n\nwith tor…'], 'answer_start': [638]} |
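A self-contained sketch of setting a custom (identity) kernel while keeping the layer trainable, in the spirit of the snippet above:

import torch
import torch.nn as nn

conv = nn.Conv2d(1, 1, 3, padding=1, bias=False)
identity = torch.tensor([[0., 0., 0.],
                         [0., 1., 0.],
                         [0., 0., 0.]])
with torch.no_grad():
    conv.weight.copy_(identity.view(1, 1, 3, 3))  # custom kernel; requires_grad stays True
x = torch.randn(1, 1, 5, 5)
print(torch.allclose(conv(x), x))                 # True: acts as the identity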
Resize tensor without converting to PIL image? | I have 6-channel images (512x512x6) that I would like to resize while preserving the 6-channels (say to 128x128x6). torchvision.transforms.Resize expects a PIL image in input but I cannot (& do not want to) convert my images to PIL. Any idea how to do this within torchvision transforms (i.e. without… | 2 | 2019-08-02T09:04:49.316Z | Hi,
You can do it using the <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.interpolate" rel="nofollow noopener">interpolate</a> function, and it supports different methods.
Here is an example:
import PIL.Image as Image
from torchvision.transforms import ToTensor, ToPILImage
import torch.nn.functional as F
img = Image.open('data/Places365_val_00000001.jpg')
img = ToTensor()(img)
out = F… | 16 | 2019-08-02T10:11:05.879Z | https://discuss.pytorch.org/t/resize-tensor-without-converting-to-pil-image/52401/2 | Hi,
You can do it using the <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.interpolate" rel="nofollow noopener">interpolate</a> function, and it supports different methods.
Here is an example:
import PIL.Image as Image
from torchvision.transforms import ToTensor, ToPILImage
import torch.nn.functional as F
img = Image.open('data/Places365_val_00000001.jpg')
img = ToTensor()(img)
out = F… Hi,
The difference is that instantiating + calling the Function works with “old style” functions (which are going to be deprecated in the future).
Using .apply is for the “new style” functions. You can differentiate the two easily: new style functions are defined with only @staticmethod, while ol… Alternatively to <a class="mention" href="/u/alband">@albanD</a>’s solution, you could also use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder">DatasetFolder</a>, which basically is the underlying class of ImageFolder.
Using this class you can provide your own file extensions and loader to load the samples.
def npy_loader(path):
sample = torch.from_numpy(np.load(path))
return sa… | 2,146 | {'text': ['Hi,\n\nYou can do it using <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.interpolate" rel="nofollow noopener">interpolate</a> function and it supports different methods.\n\nHere is an example:\n\nimport PIL.Image as Image\n\nfrom torchvision.transforms import ToTensor, ToPILImage\n\nimport torch.nn.functional as F\n\nimg = Image.open('data/Places365_val_00000001.jpg')\n\nimg = ToTensor()(img)\n\nout = F…'], 'answer_start': [2146]} |
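The same idea for the 6-channel case asked about above, sketched with random data instead of a file:

import torch
import torch.nn.functional as F

img = torch.randn(6, 512, 512)                    # 6-channel image, no PIL needed
out = F.interpolate(img.unsqueeze(0), size=(128, 128),
                    mode='bilinear', align_corners=False)
out = out.squeeze(0)                              # back to (6, 128, 128)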
Difference between apply and call for an autograd function | I created a simple autograd function, let’s call it F (based on torch.autograd.Function).
What’s the difference between calling
a = F.apply(args)
and instantiating, then calling, like this:
f = F()
a = f(args)
The two versions seem to be used in pytorch code, and in examples | 4 | 2018-02-20T14:25:38.317Z | Hi,
The difference is that instantiating + calling the Function works with “old style” functions (which are going to be deprecated in the future).
Using .apply is for the “new style” functions. You can differentiate the two easily: new style functions are defined with only @staticmethod, while ol… | 10 | 2018-02-20T14:27:33.485Z | https://discuss.pytorch.org/t/difference-between-apply-an-call-for-an-autograd-function/13845/2 | Hi,
You can do it using the <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.interpolate" rel="nofollow noopener">interpolate</a> function, and it supports different methods.
Here is an example:
import PIL.Image as Image
from torchvision.transforms import ToTensor, ToPILImage
import torch.nn.functional as F
img = Image.open('data/Places365_val_00000001.jpg')
img = ToTensor()(img)
out = F… Hi,
The difference is that instantiating + calling the Function works with “old style” functions (which are going to be deprecated in the future).
Using .apply is for the “new style” functions. You can differentiate the two easily: new style functions are defined with only @staticmethod, while ol… Alternatively to <a class="mention" href="/u/alband">@albanD</a>’s solution, you could also use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder">DatasetFolder</a>, which basically is the underlying class of ImageFolder.
Using this class you can provide your own file extensions and loader to load the samples.
def npy_loader(path):
sample = torch.from_numpy(np.load(path))
return sa… | 1,503 | {'text': ['Hi,\n\nThis difference is that instantiating + calling the Function works with “old style” functions (which are going to be deprecated in the future).\n\nUsing .apply is for the “new style” functions. You can differentiate the two easily: new style functions are defined with only @staticmethod, while ol…'], 'answer_start': [1503]} |
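A minimal new-style Function, illustrating the @staticmethod plus .apply pattern described above (Exp is just an example op):

import torch

class Exp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.exp()

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        return grad_output * x.exp()

x = torch.randn(3, requires_grad=True)
y = Exp.apply(x)                                  # new style: no instantiation
y.sum().backward()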
Loading .npy files using torchvision | Dear all,
I am trying to train my own Resnet model using .npy format files.
I am wondering whether there are any functions like torchvision.datasets.ImageFolder that can load .npy files in a folder and label these numpy arrays with their folder name? | 4 | 2018-11-01T00:08:12.598Z | Alternatively to <a class="mention" href="/u/alband">@albanD</a>’s solution, you could also use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder">DatasetFolder</a>, which basically is the underlying class of ImageFolder.
Using this class you can provide your own file extensions and loader to load the samples.
def npy_loader(path):
sample = torch.from_numpy(np.load(path))
return sa… | 12 | 2018-11-01T12:32:20.524Z | https://discuss.pytorch.org/t/loading-npy-files-using-torchvision/28481/3 | Hi,
You can do it using the <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.functional.interpolate" rel="nofollow noopener">interpolate</a> function, and it supports different methods.
Here is an example:
import PIL.Image as Image
from torchvision.transforms import ToTensor, ToPILImage
import torch.nn.functional as F
img = Image.open('data/Places365_val_00000001.jpg')
img = ToTensor()(img)
out = F… Hi,
The difference is that instantiating + calling the Function works with “old style” functions (which are going to be deprecated in the future).
Using .apply is for the “new style” functions. You can differentiate the two easily: new style functions are defined with only @staticmethod, while ol… Alternatively to <a class="mention" href="/u/alband">@albanD</a>’s solution, you could also use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder">DatasetFolder</a>, which basically is the underlying class of ImageFolder.
Using this class you can provide your own file extensions and loader to load the samples.
def npy_loader(path):
sample = torch.from_numpy(np.load(path))
return sa… | 739 | {'text': ['Alternatively to <a class="mention" href="/u/alband">@albanD</a>’s solution, you could also use <a href="https://pytorch.org/docs/stable/torchvision/datasets.html#datasetfolder">DatasetFolder</a>, which basically is the underlying class of ImageFolder.\n\nUsing this class you can provide your own files extensions and loader to load the samples.\n\ndef npy_loader(path):\n\nsample = torch.from_numpy(np.load(path))\n\nreturn sa…'], 'answer_start': [739]} |
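Putting the npy_loader above to work, a sketch that assumes a folder layout of data/<class_name>/*.npy:

import numpy as np
import torch
from torchvision.datasets import DatasetFolder

def npy_loader(path):
    return torch.from_numpy(np.load(path))

dataset = DatasetFolder(root='data', loader=npy_loader, extensions=('.npy',))
sample, label = dataset[0]                        # tensor and folder-derived class index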
Concatenate layer output with additional input data | I want to build a CNN model that takes additional input data besides the image at a certain layer.
To do that, I plan to use a standard CNN model, take one of its last FC layers, concatenate it with the additional input data and add FC layers processing both inputs.
[image: untitled model diagram]
T… | 2 | 2018-06-29T11:23:27.570Z | Here is a small example for your use case:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.cnn = models.inception_v3(pretrained=False, aux_logits=False)
self.cnn.fc = nn.Linear(
self.cnn.fc.in_features, 20)
… | 32 | 2018-06-29T11:38:52.007Z | https://discuss.pytorch.org/t/concatenate-layer-output-with-additional-input-data/20462/2 | Here is a small example for your use case:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.cnn = models.inception_v3(pretrained=False, aux_logits=False)
self.cnn.fc = nn.Linear(
self.cnn.fc.in_features, 20)
… Use the basic knowledge of software engineering.
class MultipleOptimizer(object):
def __init__(self, *op):
self.optimizers = op
def zero_grad(self):
for op in self.optimizers:
op.zero_grad()
def step(self):
for op in self.optimizers:
op.step(… In that case you should use <a href="https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer">register_buffer</a>. | 2,334 | {'text': ['Here is a small example for your use case:\n\nclass MyModel(nn.Module):\n\ndef __init__(self):\n\nsuper(MyModel, self).__init__()\n\nself.cnn = models.inception_v3(pretrained=False, aux_logits=False)\n\nself.cnn.fc = nn.Linear(\n\nself.cnn.fc.in_features, 20)\n\n…'], 'answer_start': [2334]} |
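The key line of the approach sketched above is the torch.cat in forward; a small self-contained version (all layer sizes are invented):

import torch
import torch.nn as nn

class FusionModel(nn.Module):
    def __init__(self):
        super(FusionModel, self).__init__()
        self.fc_img = nn.Linear(128, 20)          # stand-in for the CNN feature head
        self.fc1 = nn.Linear(20 + 10, 60)         # 10 extra (non-image) input features
        self.fc2 = nn.Linear(60, 5)

    def forward(self, image_features, extra):
        x = self.fc_img(image_features)
        x = torch.cat((x, extra), dim=1)          # fuse along the feature dimension
        return self.fc2(torch.relu(self.fc1(x)))

out = FusionModel()(torch.randn(4, 128), torch.randn(4, 10))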
Two optimizers for one model | Is there a way to use two optimizers for one model in a more beautiful way?
Now, as I understand it, we should do something like this:
net = Model()
part_1_parameters = ...
part_2_parameters = ...
opt1 = optimizer_1(part_1_parameters)
opt2 = optimizer_2(part_2_parameters)
### train epoch
opt1.zero_gra… | 3 | 2017-12-12T23:46:03.031Z | Use the basic knowledge of software engineering.
class MultipleOptimizer(object):
def __init__(self, *op):
self.optimizers = op
def zero_grad(self):
for op in self.optimizers:
op.zero_grad()
def step(self):
for op in self.optimizers:
op.step(… | 23 | 2017-12-13T06:13:25.284Z | https://discuss.pytorch.org/t/two-optimizers-for-one-model/11085/7 | Here is a small example for your use case:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.cnn = models.inception_v3(pretrained=False, aux_logits=False)
self.cnn.fc = nn.Linear(
self.cnn.fc.in_features, 20)
… Use the basic knowledge of software engineering.
class MultipleOptimizer(object):
def __init__(self, *op):
self.optimizers = op
def zero_grad(self):
for op in self.optimizers:
op.zero_grad()
def step(self):
for op in self.optimizers:
op.step(… In that case you should use <a href="https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer">register_buffer</a>. | 1,425 | {'text': ['Use the basic knowledge of software engineering.\n\nclass MultipleOptimizer(object):\n\ndef __init__(*op):\n\nself.optimizers = op\n\ndef zero_grad(self):\n\nfor op in self.optimizers:\n\nop.zero_grad()\n\ndef step(self):\n\nfor op in self.optimizers:\n\nop.step(…'], 'answer_start': [1425]} |
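Usage of the MultipleOptimizer class above (with the missing self argument fixed), e.g. one optimizer per parameter group:

import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(4, 2)
opt = MultipleOptimizer(optim.SGD([net.weight], lr=0.1),
                        optim.Adam([net.bias], lr=0.01))
opt.zero_grad()
net(torch.randn(8, 4)).sum().backward()
opt.step()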
Why model.to(device) wouldn’t put tensors on a custom layer to the same device? | Currently, I have to pass a device parameter into my custom layer and then manually put tensors onto the specified device using .to(device) or device=device.
Is this behavior expected? It looks kind of ugly to me.
Shouldn’t model.to(device) put all the layers, including my custom layer, t… | 1 | 2018-05-12T03:02:57.421Z | In that case you should use <a href="https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer">register_buffer</a>. | 7 | 2018-05-13T09:00:44.862Z | https://discuss.pytorch.org/t/why-model-to-device-wouldnt-put-tensors-on-a-custom-layer-to-the-same-device/17964/8 | Here is a small example for your use case:
class MyModel(nn.Module):
def __init__(self):
super(MyModel, self).__init__()
self.cnn = models.inception_v3(pretrained=False, aux_logits=False)
self.cnn.fc = nn.Linear(
self.cnn.fc.in_features, 20)
… Use the basic knowledge of software engineering.
class MultipleOptimizer(object):
def __init__(self, *op):
self.optimizers = op
def zero_grad(self):
for op in self.optimizers:
op.zero_grad()
def step(self):
for op in self.optimizers:
op.step(… In that case you should use <a href="https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer">register_buffer</a>. | 512 | {'text': ['In that case you should use <a href="https://pytorch.org/docs/stable/nn.html?highlight=register_buffer#torch.nn.Module.register_buffer">register_buffer</a>.'], 'answer_start': [512]} |
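A short sketch of the register_buffer suggestion: the buffer is not trainable but travels with model.to(device):

import torch
import torch.nn as nn

class MyLayer(nn.Module):
    def __init__(self):
        super(MyLayer, self).__init__()
        self.register_buffer('scale', torch.ones(1))   # moves with .to()/.cuda(), no grads

    def forward(self, x):
        return x * self.scale

layer = MyLayer()
out = layer(torch.randn(2, 3))                         # layer.to('cuda') would move scale too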
How to resize and pad in a torchvision.transforms.Compose()? | I’m creating a torchvision.datasets.ImageFolder() data loader, adding torchvision.transforms steps for preprocessing each image inside my training/validation datasets.
My main issue is that each image from training/validation has a different size (i.e.: 224x400, 150x300, 300x150, 224x224 etc). Sinc… | 1 | 2020-03-03T14:38:28.498Z | I think I did something similar where I kept all ratios by making all the images the width and height of the biggest image. Then set the image in the center and pad the empty spaces. I don’t know if there’s a function that does this automatically, but I did it myself. But I made fun… | 1 | 2020-03-03T23:15:04.935Z | https://discuss.pytorch.org/t/how-to-resize-and-pad-in-a-torchvision-transforms-compose/71850/2 | I think I did something similar where I kept all ratios by making all the images the width and height of the biggest image. Then set the image in the center and pad the empty spaces. I don’t know if there’s a function that does this automatically, but I did it myself. But I made fun… Hi! I hope it’s not too late.
I found this link, which has details about the vgg-face model along with its weights. Scroll down to the vgg-face section and download your requirements.
<a href="http://www.robots.ox.ac.uk/~albanie/pytorch-models.html" class="onebox" target="_blank" rel="nofollow noopener">http://www.robots.ox.ac.uk/~albanie/pytorch-models.html</a>
Hope this helps. <a class="mention" href="/u/leo-mao">@leo-mao</a>, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your … | 1,336 | {'text': ['I think I did something similar where I kept all ratios by making the each making all the images the width, and height of the biggest image. Then set the image in the center and pad the empty spaces. I don’t know if there’s an function that does this automatically but I did it myself. but I made fun…'], 'answer_start': [1336]} |
Pretrained VGG-Face model | I have searched for vgg-face pretrained model in pytorch, but couldn’t find it. Is there a github repo for the pretrained model of vgg-face in pytorch? | 4 | 2017-11-01T16:02:02.662Z | Hi! I hope it’s not too late.
I found this link, which has details about the vgg-face model along with its weights. Scroll down to the vgg-face section and download your requirements.
<a href="http://www.robots.ox.ac.uk/~albanie/pytorch-models.html" class="onebox" target="_blank" rel="nofollow noopener">http://www.robots.ox.ac.uk/~albanie/pytorch-models.html</a>
Hope this helps. | 7 | 2018-07-27T15:32:00.479Z | https://discuss.pytorch.org/t/pretrained-vgg-face-model/9383/2 | I think I did something similar where I kept all ratios by making all the images the width and height of the biggest image. Then set the image in the center and pad the empty spaces. I don’t know if there’s a function that does this automatically, but I did it myself. But I made fun… Hi! I hope it’s not too late.
I found this link, which has details about the vgg-face model along with its weights. Scroll down to the vgg-face section and download your requirements.
<a href="http://www.robots.ox.ac.uk/~albanie/pytorch-models.html" class="onebox" target="_blank" rel="nofollow noopener">http://www.robots.ox.ac.uk/~albanie/pytorch-models.html</a>
Hope this helps. <a class="mention" href="/u/leo-mao">@leo-mao</a>, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your … | 977 | {'text': ['Hi! I hope it’s not too late.\n\nI had found this link pertaining to details regarding vgg-face model along with its weights in the link below. Scroll down to the vgg-face section and download your requirements.\n\n<a href="http://www.robots.ox.ac.uk/~albanie/pytorch-models.html" class="onebox" target="_blank" rel="nofollow noopener">http://www.robots.ox.ac.uk/~albanie/pytorch-models.html</a>\n\nHope this helps.'], 'answer_start': [977]} |
Multiprocessing failed with Torch.distributed.launch module | While training the MNIST example dataset in PyTorch, I met this RuntimeError on the master node
File "./torch-dist/mnist-dist.py", line 201, in <module>
init_processes(args.rank, args.world_size, run, args.batch_size, backend=args.backend)
File "./torch-dist/mnist-dist.py", line 196, in init… | 2 | 2018-12-26T02:21:15.467Z | <a class="mention" href="/u/leo-mao">@leo-mao</a>, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your … | 3 | 2019-01-03T19:48:26.202Z | https://discuss.pytorch.org/t/multiprocessing-failed-with-torch-distributed-launch-module/33056/7 | I think I did something similar where I kept all ratios by making all the images the width and height of the biggest image. Then set the image in the center and pad the empty spaces. I don’t know if there’s a function that does this automatically, but I did it myself. But I made fun… Hi! I hope it’s not too late.
I found this link, which has details about the vgg-face model along with its weights. Scroll down to the vgg-face section and download your requirements.
<a href="http://www.robots.ox.ac.uk/~albanie/pytorch-models.html" class="onebox" target="_blank" rel="nofollow noopener">http://www.robots.ox.ac.uk/~albanie/pytorch-models.html</a>
Hope this helps. <a class="mention" href="/u/leo-mao">@leo-mao</a>, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.
So please change that to dist.init_process_group(backend=backend, init_method="env://")
Also, you should not set WORLD_SIZE, RANK env variables in your … | 719 | {'text': ['<a class="mention" href="/u/leo-mao">@leo-mao</a>, you should not set world_size and rank in torch.distributed.init_process_group, they are automatically set by torch.distributed.launch.\n\nSo please change that to dist.init_process_group(backend=backend, init_method=“env://”)\n\nAlso, you should not set WORLD_SIZE, RANK env variables in your …'], 'answer_start': [719]} |
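The recommended call then reduces to the sketch below. It is only valid when the script is started via torch.distributed.launch, which provides RANK and WORLD_SIZE in the environment; the backend choice here is an assumption.

import torch.distributed as dist

dist.init_process_group(backend='nccl', init_method='env://')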
DataLoader - using SubsetRandomSampler and WeightedRandomSampler at the same time | I have a dataset that contains both the training and validation set. I am aware that I can use the SubsetRandomSampler to split the dataset into the training and validation subsets. The dataset, however, has an unbalanced class ratio. How can I also use the WeightedRandomSampler together with the Sub… | 4 | 2018-11-18T16:44:14.957Z | That’s an interesting use case!
Basically you could just use the subset indices to create your WeightedRandomSampler, i.e. calculate the class imbalance, weights etc.
Here is a small example:
# Create dummy data with class imbalance 99 to 1
numDataPoints = 1000
data_dim = 5
bs = 100
data = torch.… | 7 | 2018-11-18T22:18:54.470Z | https://discuss.pytorch.org/t/dataloader-using-subsetrandomsampler-and-weightedrandomsampler-at-the-same-time/29907/2 | That’s an interesting use case!
Basically you could just use the subset indices to create your WeightedRandomSampler, i.e. calculate the class imbalance, weights etc.
Here is a small example:
# Create dummy data with class imbalance 99 to 1
numDataPoints = 1000
data_dim = 5
bs = 100
data = torch.… This toy example works.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = nn.Linear(1, 1)
self.layer.weight.data.fill_(1)
self.lay… You can probably use a combination of tensor operations to compute your loss.
For example
def mse_loss(input, target):
return torch.sum((input - target) ** 2)
def weighted_mse_loss(input, target, weight):
return torch.sum(weight * (input - target) ** 2) | 2,136 | {'text': ['That’s an interesting use case!\n\nBasically you could just use the subset indices to create your WeightedRandomSampler, i.e. calculate the class imbalance, weights etc.\n\nHere is a small example:\n\n# Create dummy data with class imbalance 99 to 1\n\nnumDataPoints = 1000\n\ndata_dim = 5\n\nbs = 100\n\ndata = torch.…'], 'answer_start': [2136]} |
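A compressed sketch of the idea above: build the WeightedRandomSampler only from the training-subset targets (all data here is dummy):

import torch
from torch.utils.data import DataLoader, Subset, TensorDataset, WeightedRandomSampler

data = torch.randn(1000, 5)
targets = torch.randint(0, 2, (1000,))
train_idx = list(range(800))                           # assumed training-subset indices

subset_targets = targets[torch.tensor(train_idx)]
class_count = torch.bincount(subset_targets).float()
sample_weights = (1.0 / class_count)[subset_targets]   # rarer class -> larger weight
sampler = WeightedRandomSampler(sample_weights, num_samples=len(train_idx), replacement=True)
loader = DataLoader(Subset(TensorDataset(data, targets), train_idx),
                    batch_size=100, sampler=sampler)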
How to set different learning rate for weight and bias in one layer? | In Caffe, we can set different learning rates for the weight and bias in one layer.
For example:
layer {
name: "conv2"
type: "Convolution"
bottom: "bn_conv2"
top: "conv2"
param {
lr_mult: 1.000000
}
param {
lr_mult: 0.100000
}
convolution_param {
… | 3 | 2018-02-08T15:22:56.820Z | This toy example works.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = nn.Linear(1, 1)
self.layer.weight.data.fill_(1)
self.lay… | 7 | 2018-02-09T09:52:18.556Z | https://discuss.pytorch.org/t/how-to-set-different-learning-rate-for-weight-and-bias-in-one-layer/13450/6 | That’s an interesting use case!
Basically you could just use the subset indices to create your WeightedRandomSampler, i.e. calculate the class imbalance, weights etc.
Here is a small example:
# Create dummy data with class imbalance 99 to 1
numDataPoints = 1000
data_dim = 5
bs = 100
data = torch.… This toy example works.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = nn.Linear(1, 1)
self.layer.weight.data.fill_(1)
self.lay… You can probably use a combination of tensor operations to compute your loss.
For example
def mse_loss(input, target):
return torch.sum((input - target) ** 2)
def weighted_mse_loss(input, target, weight):
return torch.sum(weight * (input - target) ** 2) | 1,381 | {'text': ['This toy example works.\n\nimport torch\n\nimport torch.nn as nn\n\nimport torch.optim as optim\n\nfrom torch.autograd import Variable\n\nclass Net(nn.Module):\n\ndef __init__(self):\n\nsuper(Net, self).__init__()\n\nself.layer = nn.Linear(1, 1)\n\nself.layer.weight.data.fill_(1)\n\nself.lay…'], 'answer_start': [1381]} |
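In current PyTorch the per-parameter options API expresses the Caffe lr_mult idea directly; a sketch:

import torch.nn as nn
import torch.optim as optim

model = nn.Linear(1, 1)
optimizer = optim.SGD([
    {'params': [model.weight]},                        # uses the default lr below
    {'params': [model.bias], 'lr': 0.01},              # overridden lr, like Caffe's lr_mult
], lr=0.1, momentum=0.9)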
How to implement weighted mean square error? | Hello guys, I would like to implement the loss function below, which is a weighted mean square loss function:
<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/1X/64fce39f8376e6f940cb95f96aa227a5fb5eb9ff.jpg" data-download-href="https://discuss.pytorch.org/uploads/default/64fce39f8376e6f940cb95f96aa227a5fb5eb9ff" title="photo_۲۰۱۷-۰۵-۰۱_۱۳-۵۳-۲۲.jpg">[image]</a>
How can I implement such a loss function in pytorch? In other words, is there any way to use nn.MSELoss to achieve my mentioned loss function? | 0 | 2017-05-01T13:50:47.019Z | You can probably use a combination of tensor operations to compute your loss.
For example
def mse_loss(input, target):
return torch.sum((input - target) ** 2)
def weighted_mse_loss(input, target, weight):
return torch.sum(weight * (input - target) ** 2) | 11 | 2017-05-01T14:25:48.750Z | https://discuss.pytorch.org/t/how-to-implement-weighted-mean-square-error/2547/2 | That’s an interesting use case!
Basically you could just use the subset indices to create your WeightedRandomSampler, i.e. calculate the class imbalance, weights etc.
Here is a small example:
# Create dummy data with class imbalance 99 to 1
numDataPoints = 1000
data_dim = 5
bs = 100
data = torch.… This toy example works.
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.layer = nn.Linear(1, 1)
self.layer.weight.data.fill_(1)
self.lay… You can probably use a combination of tensor operations to compute your loss.
For example
def mse_loss(input, target):
return torch.sum((input - target) ** 2)
def weighted_mse_loss(input, target, weight):
return torch.sum(weight * (input - target) ** 2) | 594 | {'text': ['You can probably use a combination of tensor operations to compute your loss.\n\nFor example\n\ndef mse_loss(input, target):\n\nreturn torch.sum((input - target) ** 2)\n\ndef weighted_mse_loss(input, target, weight):\n\nreturn torch.sum(weight * (input - target) ** 2)'], 'answer_start': [594]} |
How to make a tensor part of model parameters? | I have a parameter that is learnable, and I want the model to update it. Here is how I attached it to the model:
class Dan(nn.Module):
def __init__(self):
super(Dan, self).__init__()
blah blah blah
self.alpha = t.tensor(0.5, requires_grad=True).cuda()
It is alpha. However, after tr… | 2 | 2019-07-19T07:36:29.026Z | Although the tensor was defined in the __init__ method, it won’t show in the internal parameters:
class Dan(nn.Module):
def __init__(self):
super(Dan, self).__init__()
self.alpha = torch.tensor(0.5, requires_grad=True)
model = Dan()
print(list(model.parameters()))
> []
As <a class="mention" href="/u/mazhar_shaikh">@Mazh…</a> | 16 | 2019-07-19T22:33:13.215Z | https://discuss.pytorch.org/t/how-to-make-a-tensor-part-of-model-parameters/51037/7 | Although the tensor was defined in the __init__ method, it won’t show in the interal parameters:
class Dan(nn.Module):
def __init__(self):
super(Dan, self).__init__()
self.alpha = torch.tensor(0.5, requires_grad=True)
model = Dan()
print(list(model.parameters()))
> []
As <a class="mention" href="/u/mazhar_shaikh">@Mazh…</a> This would split the dataset before using any of the PyTorch classes.
You would get different splits and create different Dataset classes:
X = np.random.randn(1000, 2)
y = np.random.randint(0, 10, size=1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)
np.un… I think the easiest approach would be to specify reduction='none' in your criterion and then multiply each output with your weights:
target = torch.tensor([[0,1,0,1,0,0]], dtype=torch.float32)
output = torch.randn(1, 6, requires_grad=True)
weights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.08… | 1,704 | {'text': ['Although the tensor was defined in the __init__ method, it won’t show in the interal parameters:\n\nclass Dan(nn.Module):\n\ndef __init__(self):\n\nsuper(Dan, self).__init__()\n\nself.alpha = torch.tensor(0.5, requires_grad=True)\n\nmodel = Dan()\n\nprint(list(model.parameters()))\n\n> []\n\nAs <a class="mention" href="/u/mazhar_shaikh">@Mazh…</a>'], 'answer_start': [1704]} |
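The usual resolution to the question above is nn.Parameter, which registers the tensor so that model.parameters() and optimizers see it (move it with the rest of the model via model.cuda(), not a per-tensor .cuda()):

import torch
import torch.nn as nn

class Dan(nn.Module):
    def __init__(self):
        super(Dan, self).__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))   # shows up in model.parameters()

model = Dan()
print(list(model.parameters()))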
How to split test and train data keeping equal proportions of each class? | Suppose I have a dataset with the following classes:
Class A: 3000 items
Class B: 1000 items
Class C: 2000 items
I want to split this dataset into two parts so that 25% of the data is in the test set. However, how can I do this so that an equal percentage of each class is present in the test set? These i… | 2 | 2018-07-12T12:59:32.552Z | This would split the dataset before using any of the PyTorch classes.
You would get different splits and create different Dataset classes:
X = np.random.randn(1000, 2)
y = np.random.randint(0, 10, size=1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)
np.un… | 11 | 2018-07-12T18:36:47.528Z | https://discuss.pytorch.org/t/how-to-split-test-and-train-data-keeping-equal-proportions-of-each-class/21063/7 | Although the tensor was defined in the __init__ method, it won’t show in the internal parameters:
class Dan(nn.Module):
def __init__(self):
super(Dan, self).__init__()
self.alpha = torch.tensor(0.5, requires_grad=True)
model = Dan()
print(list(model.parameters()))
> []
As <a class="mention" href="/u/mazhar_shaikh">@Mazh…</a> This would split the dataset before using any of the PyTorch classes.
You would get different splits and create different Dataset classes:
X = np.random.randn(1000, 2)
y = np.random.randint(0, 10, size=1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)
np.un… I think the easiest approach would be to specify reduction='none' in your criterion and then multiply each output with your weights:
target = torch.tensor([[0,1,0,1,0,0]], dtype=torch.float32)
output = torch.randn(1, 6, requires_grad=True)
weights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.08… | 1,196 | {'text': ['This would split the dataset before using any of the PyTorch classes.\n\nYou would get different splits and create different Dataset classes:\n\nX = np.random.randn(1000, 2)\n\ny = np.random.randint(0, 10, size=1000)\n\nX_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)\n\nnp.un…'], 'answer_start': [1196]} |
Multi-Label, Multi-Class class imbalance | Hi, I have implemented a network for multi-label, multi-class classification; this has been done using BCEWithLogits outputting to 6 sigmoid units. However, I have a class imbalance and was wondering if there is a way to weight such classes in the multi-label sense.
I have labels in the following… | 3 | 2019-02-18T17:47:27.033Z | I think the easiest approach would be to specify reduction='none' in your criterion and then multiply each output with your weights:
target = torch.tensor([[0,1,0,1,0,0]], dtype=torch.float32)
output = torch.randn(1, 6, requires_grad=True)
weights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.08… | 15 | 2019-02-18T20:02:06.098Z | https://discuss.pytorch.org/t/multi-label-multi-class-class-imbalance/37573/2 | Although the tensor was defined in the __init__ method, it won’t show in the internal parameters:
class Dan(nn.Module):
def __init__(self):
super(Dan, self).__init__()
self.alpha = torch.tensor(0.5, requires_grad=True)
model = Dan()
print(list(model.parameters()))
> []
As <a class="mention" href="/u/mazhar_shaikh">@Mazh…</a> This would split the dataset before using any of the PyTorch classes.
You would get different splits and create different Dataset classes:
X = np.random.randn(1000, 2)
y = np.random.randint(0, 10, size=1000)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.1, stratify=y)
np.un… I think the easiest approach would be to specify reduction='none' in your criterion and then multiply each output with your weights:
target = torch.tensor([[0,1,0,1,0,0]], dtype=torch.float32)
output = torch.randn(1, 6, requires_grad=True)
weights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.08… | 654 | {'text': ['I think the easiest approach would be to specify reduction='none' in your criterion and then multiply each output with your weights:\n\ntarget = torch.tensor([[0,1,0,1,0,0]], dtype=torch.float32)\n\noutput = torch.randn(1, 6, requires_grad=True)\n\nweights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.08…'], 'answer_start': [654]} |
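The reduction='none' recipe above, completed into a runnable sketch (the weight values mirror the truncated snippet and are otherwise arbitrary):

import torch
import torch.nn as nn

criterion = nn.BCEWithLogitsLoss(reduction='none')
output = torch.randn(1, 6, requires_grad=True)
target = torch.tensor([[0., 1., 0., 1., 0., 0.]])
weights = torch.tensor([0.16, 0.16, 0.25, 0.25, 0.083, 0.083])
loss = (criterion(output, target) * weights).mean()    # per-element loss, then weighted
loss.backward()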
TypeError: batch must contain tensors, numbers, dicts or lists; found object | Hello Everyone!
I am rather new to PyTorch and I am trying to implement a previous project I had in TF in pytorch.
While testing my code so far I get the following error message:
Traceback (most recent call last):
File "data2test.py", line 122, in <module>
train(epoch)
File "data2test.py"… | 4 | 2018-03-09T11:53:49.214Z | You need to wrap the data with transforms.Compose before you return it.
For example add to the __init__:
self.transform = transforms.Compose([transforms.ToTensor()]) # you can add to the list all the transformations you need.
and in __getitem__ do:
return self.transform(self.x_data[index]), se… | 5 | 2018-03-09T12:24:22.157Z | https://discuss.pytorch.org/t/typeerror-batch-must-contain-tensors-numbers-dicts-or-lists-found-object/14665/4 | You need to wrap the data with transforms.Compose before you return it.
For example add to the __init__:
self.transform = transforms.Compose([transforms.ToTensor()]) # you can add to the list all the transformations you need.
and in __getitem__ do:
return self.transform(self.x_data[index]), se… I think you are looking for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="nofollow noopener">torch.nn.utils.rnn.pad_sequence</a>.
If you want to do this manually:
One greatly underappreciated (to my mind) feature of PyTorch is that you can allocate a tensor of zeros (of the right type) and then copy to slices without breaking the autograd link. This is what pad_se… The average of the batch losses will give you an estimate of the “epoch loss” during training.
Since you are calculating the loss anyway, you could just sum it and calculate the mean after the epoch finishes.
This training loss is used to see how well your model performs on the training dataset.
… | 1,944 | {'text': ['You need to wrap the data with transforms.Compose before you return it.\n\nFor example add to the __init__:\n\nself.transform = transforms.Compose([transforms.ToTensor()]) # you can add to the list all the transformations you need.\n\nand in __getitem__ do:\n\nreturn self.transform(self.x_data[index]), se…'], 'answer_start': [1944]} |
How to do padding based on lengths? | I have a list of sequences and I padded it to the same length (emb_len). I have a separate tensor that I want to concat to every data point in the sequences.
Intuitively, it is something like this
a b c d e f g 0 0 0
u u u u u u u u u u
h i j k l 0 0 0 0 0
u u u u u u u u u u
but the correc… | 2 | 2018-09-04T01:14:06.182Z | I think you are looking for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="nofollow noopener">torch.nn.utils.rnn.pad_sequence</a>.
If you want to do this manually:
One greatly underappreciated (to my mind) feature of PyTorch is that you can allocate a tensor of zeros (of the right type) and then copy to slices without breaking the autograd link. This is what pad_se… | 5 | 2018-09-04T07:57:26.576Z | https://discuss.pytorch.org/t/how-to-do-padding-based-on-lengths/24442/2 | You need to wrap the data with transforms.Compose before you return it.
For example add to the __init__:
self.transform = transforms.Compose([transforms.ToTensor()]) # you can add to the list all the transformations you need.
and in __getitem__ do:
return self.transform(self.x_data[index]), se… I think you are looking for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="nofollow noopener">torch.nn.utils.rnn.pad_sequence</a>.
If you want to do this manually:
One greatly underappreciated (to my mind) feature of PyTorch is that you can allocate a tensor of zeros (of the right type) and then copy to slices without breaking the autograd link. This is what pad_se… The average of the batch losses will give you an estimate of the “epoch loss” during training.
Since you are calculating the loss anyway, you could just sum it and calculate the mean after the epoch finishes.
This training loss is used to see how well your model performs on the training dataset.
… | 1,280 | {'text': ['I think you are looking for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="nofollow noopener">torch.nn.utils.rnn.pad_sequence</a>.\n\nIf you want to do this manually:\n\nOne greatly underappreciated (to my mind) feature of PyTorch is that you can allocate a tensor of zeros (of the right type) and then copy to slices without breaking the autograd link. This is what pad_se…'], 'answer_start': [1280]} |
What is loss.item() | What does
running_loss
in this code? I know it calculates the loss, and we need to get the probability.
Please take a look at the comment sections.
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
# this loops through 938 batches of images and labels (length of trainloader
… | 1 | 2019-11-16T20:20:30.523Z | The average of the batch losses will give you an estimate of the “epoch loss” during training.
Since you are calculating the loss anyway, you could just sum it and calculate the mean after the epoch finishes.
This training loss is used to see how well your model performs on the training dataset.
… | 6 | 2019-11-17T09:41:08.729Z | https://discuss.pytorch.org/t/what-is-loss-item/61218/6 | You need to wrap the data with transforms.Compose before you return it.
For example add to the __init__:
self.transform = transforms.Compose([transforms.ToTensor()]) # you can add to the list all the transformations you need.
and in __getitem__ do:
return self.transform(self.x_data[index]), se… I think you are looking for <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.utils.rnn.pad_sequence" rel="nofollow noopener">torch.nn.utils.rnn.pad_sequence</a>.
If you want to do this manually:
One greatly underappreciated (to my mind) feature of PyTorch is that you can allocate a tensor of zeros (of the right type) and then copy to slices without breaking the autograd link. This is what pad_se… The average of the batch losses will give you an estimate of the “epoch loss” during training.
Since you are calculating the loss anyway, you could just sum it and calculate the mean after the epoch finishes.
This training loss is used to see how well your model performs on the training dataset.
… | 726 | {'text': ['The average of the batch losses will give you an estimate of the “epoch loss” during training.\n\nSince you are calculating the loss anyway, you could just sum it and calculate the mean after the epoch finishes.\n\nThis training loss is used to see, how well your model performs on the training dataset.\n\n…'], 'answer_start': [726]} |
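A self-contained sketch of the accumulate-then-average pattern described above (the model and the "loader" are stand-ins):

import torch
import torch.nn as nn

model, criterion = nn.Linear(4, 2), nn.CrossEntropyLoss()
trainloader = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(5)]

running_loss = 0.0
for images, labels in trainloader:
    loss = criterion(model(images), labels)
    running_loss += loss.item()                        # .item() gives a plain Python float
epoch_loss = running_loss / len(trainloader)           # average batch loss for the epoch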
Cross-entropy with one-hot targets | I’d like to use the cross-entropy loss function that can take one-hot encoded values as the target.
# Fake NN output
out = torch.FloatTensor([[0.05, 0.9, 0.05], [0.05, 0.05, 0.9], [0.9, 0.05, 0.05]])
out = torch.autograd.Variable(out)
# Categorical targets
y = torch.LongTensor([1, 2, 0])
y = torch… | 1 | 2018-02-12T22:29:41.428Z | Ah I see. Thank you for your clarification.
BCELoss doesn’t quite do what you want it to do, because it has that extra term on the right (and I presume you only want the term on the left?)
[image: the BCELoss formula — https://discuss.pytorch.org/uploads/default/original/2X/c/c98570d1854f9fafa103a4ea9b6b9e6db3a22838.png]
There’s no built-in PyTorch function to do this right now, but you can use the cross_entropy functi… | 0 | 2018-02-13T22:18:17.646Z | https://discuss.pytorch.org/t/cross-entropy-with-one-hot-targets/13580/6 | Ah I see. Thank you for your clarification.
BCELoss doesn’t quite do what you want it to do, because it has that extra term on the right (and I presume you only want the term on the left?)
[image: the BCELoss formula — https://discuss.pytorch.org/uploads/default/original/2X/c/c98570d1854f9fafa103a4ea9b6b9e6db3a22838.png]
There’s no built-in PyTorch function to do this right now, but you can use the cross_entropy functi… I asked on Stack Overflow and got this answer (https://stackoverflow.com/questions/54746829/pytorch-whats-the-difference-between-state-dict-and-parameters).
If something should be added or subtracted from it, please let me know.
Otherwise, I will accept it here as well. Your solution should read
T = torch.cat([T[0:i], T[i+1:]])
or equivalently
T = torch.cat([T[:i], T[i+1:]])
(but there is probably a better way to do this) | 2,070 | {'text': ['Ah I see. Thank you for your clarification.\n\nBCELoss doesn’t quite do what you want it to do, because it has that extra term on the right (and I presume you only want the term on the left?)\n\n<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/c/c98570d1854f9fafa103a4ea9b6b9e6db3a22838.png" data-download-href="https://discuss.pytorch.org/uploads/default/c98570d1854f9fafa103a4ea9b6b9e6db3a22838" title="image.png">[image]</a>\n\nThere’s no built in PyTorch function to do this right now, but you can use the cross_entropy functi…'], 'answer_start': [2070]} |
Difference between state_dict and parameters() | I saw two methods of accessing a model’s parameters:
using state_dict (https://stackoverflow.com/questions/49446785/how-can-i-update-the-parameters-of-a-neural-network-in-pytorch)
using parameters() (https://stackoverflow.com/questions/49201236/check-the-total-number-of-parameters-in-a-pytorch-model)
Which is more correct?
What are the differences?
Thanks | 4 | 2019-02-18T12:09:58.328Z | I asked on Stack Overflow and got this answer (https://stackoverflow.com/questions/54746829/pytorch-whats-the-difference-between-state-dict-and-parameters).
If something should be added or subtracted from it, please let me know.
Otherwise, I will accept it here as well. | 0 | 2020-12-23T13:35:16.908Z | https://discuss.pytorch.org/t/difference-between-state-dict-and-parameters/37531/10 | Ah I see. Thank you for your clarification.
BCELoss doesn’t quite do what you want it to do, because it has that extra term on the right (and I presume you only want the term on the left?)
[image: the BCELoss formula — https://discuss.pytorch.org/uploads/default/original/2X/c/c98570d1854f9fafa103a4ea9b6b9e6db3a22838.png]
There’s no built-in PyTorch function to do this right now, but you can use the cross_entropy functi… I asked on Stack Overflow and got this answer (https://stackoverflow.com/questions/54746829/pytorch-whats-the-difference-between-state-dict-and-parameters).
If something should be added or subtracted from it, please let me know.
Otherwise, I will accept it here as well. Your solution should read
T = torch.cat([T[0:i], T[i+1:]])
or equivalently
T = torch.cat([T[:i], T[i+1:]])
(but there is probably a better way to do this) | 1,601 | {'text': ['I asked on Stack Overflow and got <a href="https://stackoverflow.com/questions/54746829/pytorch-whats-the-difference-between-state-dict-and-parameters" rel="noopener nofollow ugc">this answer</a>.\n\nIf something should be added or subtracted from it, please let me know.\n\nOtherwise, I will accept it here as well.'], 'answer_start': [1601]} |
How to remove an element from a 1-d tensor by index? | So I have a 1-d tensor T and an index i and need to remove the i-th element from a tensor T, much like in pure Python T.remove(i).
I’ve tried to do this:
i = 2
T = torch.tensor([1,2,3,4,5])
T = torch.cat([T[0:i], T[i+1:-1]])
But it fails to bring in the last element (5 in this case).
Any suggestions… | 1 | 2018-08-14T12:36:40.376Z | Your solution should read
T = torch.cat([T[0:i], T[i+1:]])
or equivalently
T = torch.cat([T[:i], T[i+1:]])
(but there is probably a better way to do this) | 2 | 2018-08-14T13:07:25.387Z | https://discuss.pytorch.org/t/how-to-remove-an-element-from-a-1-d-tensor-by-index/23109/3 | Ah I see. Thank you for your clarification.
BCELoss doesn’t quite do what you want it to do, because it has that extra term on the right (and I presume you only want the term on the left?)
[image: the BCELoss formula — https://discuss.pytorch.org/uploads/default/original/2X/c/c98570d1854f9fafa103a4ea9b6b9e6db3a22838.png]
There’s no built-in PyTorch function to do this right now, but you can use the cross_entropy functi… I asked on Stack Overflow and got this answer (https://stackoverflow.com/questions/54746829/pytorch-whats-the-difference-between-state-dict-and-parameters).
If something should be added or subtracted from it, please let me know.
Otherwise, I will accept it here as well. Your solution should read
T = torch.cat([T[0:i], T[i+1:]])
or equivalently
T = torch.cat([T[:i], T[i+1:]])
(but there is probably a better way to do this) | 879 | {'text': ['Your solution should read\n\nT = torch.cat([T[0:i], T[i+1:]])\n\nor equivalently\n\nT = torch.cat([T[:i], T[i+1:]])\n\n(but there is probably a better way to do this)'], 'answer_start': [879]} |
Adding new parameters | I’d like to add a new Parameter to my network. I have successfully created one, incorporated it into forward() and have a grad calculated in backward(). However, when I apply optimizer.step() the grad is not applied. Searching through here I have seen the register_parameter() function. This adds … | 3 | 2018-02-10T18:10:01.552Z | When you add a new parameter:
assign the parameter to an attribute of the module
add it to the optimizer via optim.add_param_group({"params": my_new_param}) | 5 | 2018-02-10T23:08:48.077Z | https://discuss.pytorch.org/t/adding-new-parameters/13534/5 | When you add a new parameter:
assign the parameter to an attribute of the module
add it to the optimizer via optim.add_param_group({"params": my_new_param}) I would recommend looking into HDF5. The handling is similar to numpy arrays (with the indexing), but the dataset is not loaded into memory until you access it. I just wrote a quick example for converting a CSV into HDF5 using Iris for illustration purposes; here, imagine the iris dataset is a super… Here is a small example:
class MyDataset(Dataset):
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, index):
        x, y = self.subset[index]
        if self.transform:
x = self.transform(x… | 2,074 | {'text': ['When you add new parameter:\n\nassign the parameter to attribute of the module\n\nadd it to the optimizer via optim.add_param_group({"params": my_new_param})'], 'answer_start': [2074]} |
Data processing as a batch way | Hi everyone,
Here is my problem:
suppose I have a 1G .csv file, then I will process it, which will make it expand to 30G. It’s unacceptable to load the whole file into memory and then process it, so I’m considering using Dataset & DataLoader to do that.
Can anyone tell me how to do that in detail?
T… | 4 | 2018-02-28T21:12:02.628Z | I would recommend looking into HDF5. The handling is similar to numpy arrays (with the indexing), but the dataset is not loaded into memory until you access it. I just wrote a quick example for converting a CSV into HDF5 using Iris for illustration purposes; here, imagine the iris dataset is a super… | 6 | 2018-03-02T16:48:16.578Z | https://discuss.pytorch.org/t/data-processing-as-a-batch-way/14154/9 | When you add a new parameter:
assign the parameter to an attribute of the module
add it to the optimizer via optim.add_param_group({"params": my_new_param}) I would recommend looking into HDF5. The handling is similar to numpy arrays (with the indexing), but the dataset is not loaded into memory until you access it. I just wrote a quick example for converting a CSV into HDF5 using Iris for illustration purposes; here, imagine the iris dataset is a super… Here is a small example:
class MyDataset(Dataset):
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, index):
        x, y = self.subset[index]
        if self.transform:
x = self.transform(x… | 1,201 | {'text': ['I would recommend looking into HDF5. The handling is similar to numpy arrays (with the indexing), but the dataset is not loaded into memory until you access it. I just wrote a quick example for converting a CSV into HDF5 using Iris for illustration purposes; here, imagine the iris dataset is a super…'], 'answer_start': [1201]} |
Torch.utils.data.dataset.random_split | Hi,
torch.utils.data.dataset.random_split returns a Subset object which has no transforms attribute. How can I split a Dataset object and return another Dataset object with the same transforms attribute?
Thanks | 1 | 2018-12-15T11:25:55.740Z | Here is a small example:
class MyDataset(Dataset):
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, index):
        x, y = self.subset[index]
        if self.transform:
x = self.transform(x… | 19 | 2018-12-16T22:34:16.011Z | https://discuss.pytorch.org/t/torch-utils-data-dataset-random-split/32209/4 | When you add a new parameter:
assign the parameter to an attribute of the module
add it to the optimizer via optim.add_param_group({"params": my_new_param}) I would recommend looking into HDF5. The handling is similar to numpy arrays (with the indexing), but the dataset is not loaded into memory until you access it. I just wrote a quick example for converting a CSV into HDF5 using Iris for illustration purposes; here, imagine the iris dataset is a super… Here is a small example:
class MyDataset(Dataset):
    def __init__(self, subset, transform=None):
        self.subset = subset
        self.transform = transform

    def __getitem__(self, index):
        x, y = self.subset[index]
        if self.transform:
x = self.transform(x… | 473 | {'text': ['Here is a small example:\n\nclass MyDataset(Dataset):\n\ndef __init__(self, subset, transform=None):\n\nself.subset = subset\n\nself.transform = transform\n\ndef __getitem__(self, index):\n\nx, y = self.subset[index]\n\nif self.transform:\n\nx = self.transform(x…'], 'answer_start': [473]} |
Fill diagonal of matrix with zero | I have a very large n x n tensor and I want to set its diagonal values to zero while keeping the operation differentiable (so backward still works). How can it be done? Currently the solution I have in mind is this
t1 = torch.rand(n, n)
t1 = t1 * (torch.ones(n, n) - torch.eye(n, n))
However, if n is large this can potentially require a lot of… | 3 | 2019-01-19T11:40:16.950Z | another solution:
t = torch.randn(n, n)
mask = torch.eye(n, n).byte()
t.masked_fill_(mask, 0) | 3 | 2019-01-20T05:25:01.056Z | https://discuss.pytorch.org/t/fill-diagonal-of-matrix-with-zero/35083/6 | another solution:
t = torch.randn(n, n)
mask = torch.eye(n, n).byte()
t.masked_fill_(mask, 0) That’s a bit tricky, I think. But it’s doable, of course.
It is tricky because PyTorch only allows you to compute derivatives of scalars with respect to multidimensional Tensors. Thus, you have to iterate through every single scalar parameter in your model (i.e., every entry in every parameter matr… You could create a custom transformation:
class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        return tensor + torch.randn(tensor.size()) * self.std + self.mean
def __repr__… | 1,454 | {'text': ['another solution:\n\nt = torch.randn(n, n)\n\nmask = torch.eye(n, n).byte()\n\nt.masked_fill_(mask, 0)'], 'answer_start': [1454]} |
How to calculate 2nd derivative of a likelihood function | I want to calculate the diagonal of the 2nd derivative of a function (a likelihood function, for example), but I didn’t find any corresponding documentation for that.
Can anyone give me an example?
I really appreciate that.
Thanks a lot. | 4 | 2018-03-17T20:26:08.187Z | That’s a bit tricky, I think. But it’s doable, of course.
It is tricky because PyTorch only allows you to compute derivatives of scalars with respect to multidimensional Tensors. Thus, you have to iterate through every single scalar parameter in your model (i.e., every entry in every parameter matr… | 6 | 2018-03-21T11:58:43.028Z | https://discuss.pytorch.org/t/how-to-calculate-2nd-derivative-of-a-likelihood-function/15085/8 | another solution:
t = torch.randn(n, n)
mask = torch.eye(n, n).byte()
t.masked_fill_(mask, 0) That’s a bit tricky, I think. But it’s doable, of course.
It is tricky because PyTorch only allows you to compute derivatives of scalars with respect to multidimensional Tensors. Thus, you have to iterate through every single scalar parameter in your model (i.e., every entry in every parameter matr… You could create a custom transformation:
class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        return tensor + torch.randn(tensor.size()) * self.std + self.mean
def __repr__… | 824 | {'text': ['That’s a bit tricky, I think. But it’s doable, of course.\n\nIt is tricky because PyTorch only allows you to compute derivatives of scalars with respect to multidimensional Tensors. Thus, you have to iterate through every single scalar parameter in your model (i.e., every entry in every parameter matr…'], 'answer_start': [824]} |
How to add noise to MNIST dataset when using PyTorch | I want to add noise to MNIST. I am using the following code to read the dataset:
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,
                   transform=transforms.Compose([
                       transforms.ToTensor(),
… | 1 | 2019-11-01T06:28:36.890Z | You could create a custom transformation:
class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        return tensor + torch.randn(tensor.size()) * self.std + self.mean
def __repr__… | 32 | 2019-11-01T12:33:28.371Z | https://discuss.pytorch.org/t/how-to-add-noise-to-mnist-dataset-when-using-pytorch/59745/2 | another solution:
t = torch.randn(n, n)
mask = torch.eye(n, n).byte()
t.masked_fill_(mask, 0) That’s a bit tricky, I think. But it’s doable, of course.
It is tricky because PyTorch only allows you to compute derivatives of scalars with respect to multidimensional Tensors. Thus, you have to iterate through every single scalar parameter in your model (i.e., every entry in every parameter matr… You could create a custom transformation:
class AddGaussianNoise(object):
    def __init__(self, mean=0., std=1.):
        self.std = std
        self.mean = mean

    def __call__(self, tensor):
        return tensor + torch.randn(tensor.size()) * self.std + self.mean
def __repr__… | 406 | {'text': ['You could create a custom transformation:\n\nclass AddGaussianNoise(object):\n\ndef __init__(self, mean=0., std=1.):\n\nself.std = std\n\nself.mean = mean\n\ndef __call__(self, tensor):\n\nreturn tensor + torch.randn(tensor.size()) * self.std + self.mean\n\ndef __repr__…'], 'answer_start': [406]} |
How to re-set all parameters in a network | How to re-set the weights for the entire network, using the original PyTorch weight initialization | 2 | 2018-07-06T18:13:27.432Z | Here is the code with an example that runs:
def lp_norm(mdl: nn.Module, p: int = 2) -> Tensor:
    lp_norms = [w.norm(p) for name, w in mdl.named_parameters()]
    return sum(lp_norms)

def reset_all_weights(model: nn.Module) -> None:
    """
    refs:
- https://discuss.pytorch.org/t/how-to… | 0 | 2021-11-09T22:05:43.323Z | https://discuss.pytorch.org/t/how-to-re-set-alll-parameters-in-a-network/20819/12 | Here is the code with an example that runs:
def lp_norm(mdl: nn.Module, p: int = 2) -> Tensor:
    lp_norms = [w.norm(p) for name, w in mdl.named_parameters()]
    return sum(lp_norms)

def reset_all_weights(model: nn.Module) -> None:
    """
    refs:
- https://discuss.pytorch.org/t/how-to… Could you show a minimum example? The following code works for me for PyTorch 1.1.0:
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
b = torch.zeros(300000000, dtype=torch.int8, device='cuda')
# Check GPU memory using nvidia-smi
del a
torch.cuda.empty_cache()
# Check GPU memo… What kind of error do you get?
This should work:
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
… | 1,340 | {'text': ['Here is the code with an example that runs:\n\ndef lp_norm(mdl: nn.Module, p: int = 2) -> Tensor:\n\nlp_norms = [w.norm(p) for name, w in mdl.named_parameters()]\n\nreturn sum(lp_norms)\n\ndef reset_all_weights(model: nn.Module) -> None:\n\n"""\n\nrefs:\n\n- https://discuss.pytorch.org/t/how-to…'], 'answer_start': [1340]} |
How to delete a Tensor in GPU to free up memory | How to delete a Tensor in GPU to free up memory?
I can get a Tensor on the GPU with Tensor.cuda(), but it just returns a copy on the GPU. I wonder how I can delete this Tensor on the GPU? I tried to delete it with “del Tensor”, but it doesn’t work. | 0 | 2019-06-25T05:03:52.552Z | Could you show a minimum example? The following code works for me for PyTorch 1.1.0:
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
b = torch.zeros(300000000, dtype=torch.int8, device='cuda')
# Check GPU memory using nvidia-smi
del a
torch.cuda.empty_cache()
# Check GPU memo… | 5 | 2019-06-26T03:27:36.898Z | https://discuss.pytorch.org/t/how-to-delete-a-tensor-in-gpu-to-free-up-memory/48879/6 | Here is the code with an example that runs:
def lp_norm(mdl: nn.Module, p: int = 2) -> Tensor:
lp_norms = [w.norm(p) for name, w in mdl.named_parameters()]
return sum(lp_norms)
def reset_all_weights(model: nn.Module) -> None:
"""
refs:
- https://discuss.pytorch.org/t/how-to… Could you show a minimum example? The following code works for me for PyTorch 1.1.0:
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
b = torch.zeros(300000000, dtype=torch.int8, device='cuda')
# Check GPU memory using nvidia-smi
del a
torch.cuda.empty_cache()
# Check GPU memo… What kind of error do you get?
This should work:
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
… | 981 | {'text': ['Could you show a minimum example? The following code works for me for PyTorch 1.1.0:\n\nimport torch\n\na = torch.zero(300000000, dtype=torch.int8, device='cuda')\n\nb = torch.zero(300000000, dtype=torch.int8, device='cuda')\n\n# Check GPU memory using nvidia-smi\n\ndel a\n\ntorch.cuda.empty_cache()\n\n# Check GPU memo…'], 'answer_start': [981]} |
Split single model in multiple GPUs | I would like to train a model that contains 2 sub-modules. I would like to train sub-model 1 on one GPU and sub-model 2 on another GPU. How would I do that in PyTorch? I tried specifying a CUDA device separately for each sub-module, but it throws an error.
Error: RuntimeError: tensors are on different … | 2 | 2018-02-04T00:12:45.702Z | What kind of error do you get?
This should work:
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
… | 4 | 2018-02-04T00:18:40.165Z | https://discuss.pytorch.org/t/split-single-model-in-multiple-gpus/13239/2 | Here is the code with an example that runs:
def lp_norm(mdl: nn.Module, p: int = 2) -> Tensor:
    lp_norms = [w.norm(p) for name, w in mdl.named_parameters()]
    return sum(lp_norms)

def reset_all_weights(model: nn.Module) -> None:
    """
    refs:
- https://discuss.pytorch.org/t/how-to… Could you show a minimum example? The following code works for me for PyTorch 1.1.0:
import torch
a = torch.zeros(300000000, dtype=torch.int8, device='cuda')
b = torch.zeros(300000000, dtype=torch.int8, device='cuda')
# Check GPU memory using nvidia-smi
del a
torch.cuda.empty_cache()
# Check GPU memo… What kind of error do you get?
This should work:
class MyModel(nn.Module):
    def __init__(self, split_gpus):
        self.large_submodule1 = ...
        self.large_submodule2 = ...
        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
… | 642 | {'text': ['What kind of error do you get?\n\nThis should work:\n\nclass MyModel(nn.Module):\n\ndef __init__(self, split_gpus):\n\nself.large_submodule1 = ...\n\nself.large_submodule2 = ...\n\nself.split_gpus = split_gpus\n\nif split_gpus:\n\nself.large_submodule1.cuda(0)\n\n…'], 'answer_start': [642]} |
Compute the Hessian matrix of a network | Hi, I am trying to compute the Hessian matrix by calling autograd.grad() twice on a variable.
It works fine in a toy example:
a = torch.FloatTensor([1])
b = torch.FloatTensor([3])
a, b = Variable(a, requires_grad=True), Variable(b, requires_grad=True)
c = a + 3 * b**2
c = c.sum()
grad_b = torch.aut… | 4 | 2018-03-21T15:55:10.809Z | Use PyTorch’s autograd.functional library (https://en.wikipedia.org/wiki/Hessian_matrix):
torch.autograd.functional.hessian(func, inputs) | 1 | 2021-03-31T10:50:34.480Z | https://discuss.pytorch.org/t/compute-the-hessian-matrix-of-a-network/15270/22 | Use PyTorch’s autograd.functional library (https://en.wikipedia.org/wiki/Hessian_matrix):
torch.autograd.functional.hessian(func, inputs) For a given input shape, you can use the torchinfo package (formerly torchsummary; https://stackoverflow.com/a/66984386/9067615):
Torchinfo provides information complementary to what is provided by print(your_model) in PyTorch, similar to TensorFlow’s model.summary()…
Example:
from torchinfo import summary
model = ConvNet()
batch_size = 1… Please see my experiment using a linear model below.
[image: summary fig — https://discuss.pytorch.org/uploads/default/original/3X/e/6/e62aaed0c93e388566e5d85578b0bca426f42fb4.png]
MultiTaskLoss.ipynb · GitHub (https://gist.github.com/Tony-Y/9e3687fbe10e817596d1e1ed58c9f191)
In this experiment, I used torch.stack instead of torch.Tensor to fix the reported bug in my original code, as follows:
total_loss = torch.stack(loss) * torch.exp(-self.eta) + self.eta
total_l… | 1,792 | {'text': ['Use PyTorch’s <a href="https://en.wikipedia.org/wiki/Hessian_matrix" rel="noopener nofollow ugc">autograd.functional</a> library:\n\ntorch.autograd.functional.hessian(func, inputs)'], 'answer_start': [1792]} |