Dataset schema (string columns show min/max length, int64 columns show min/max value):

Column           Type     Min    Max
name             string   15     255
question         string   20     1.77k
questionUpvotes  int64    0      23
timeCreated      string   24     24
answer           string   9      1.09k
answerUpvotes    int64    0      75
timeAnswered     string   24     24
answerURL        string   50     285
context          string   244    1.73k
answer_start     int64    0      3.45k
answers          string   46     1.14k

Sample rows:
I cannot use the pytorch that was built successfully from source: (DLL) initialization routine failed. Error loading caffe2_detectron_ops_gpu.dll
As a beginner I cannot post more than two links here, though more links were definitely needed. That is why the whole issue is described at https://github.com/pytorch/pytorch/issues/43210, but it should be answered here.
0
2020-08-18T15:03:26.566Z
After so many tries, the following has worked for me. I had to set ninja off. Ninja is in order to speed up the process, too bad that I cannot use it. Without ninja, it ran through the whole night for about 9.5 hours. I also needed to download the source code of MKL, and then, together with the mkl…
0
2020-10-20T14:30:04.888Z
https://discuss.pytorch.org/t/i-cannot-use-the-pytorch-that-was-built-successfully-from-source-dll-initialization-routine-failed-error-loading-caffe2-detectron-ops-gpu-dll/93243/20
After so many tries, the following has worked for me. I had to set ninja off. Ninja is in order to speed up the process, too bad that I cannot use it. Without ninja, it ran through the whole night for about 9.5 hours. I also needed to download the source code of MKL, and then, together with the mkl&hellip; Your target should contain class indices in the range [0, nb_classes-1]. In your case this would be [0, 23]. To debug this issue, you could add a print statement and check, which target batch fails this assumption. I had a look at the github issue and the relevant files. It is a tricky issue and it is caused by the line that updates the rng state (unnecessarily?): <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437" rel="nofollow noopener">https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437</a> I see there are 2 workarounds. The Dataloader code can be fi&hellip;
1,620
{'text': ['After so many tries, the following has worked for me.\n\nI had to set ninja off. Ninja is in order to speed up the process, too bad that I cannot use it. Without ninja, it ran through the whole night for about 9.5 hours. I also needed to download the source code of MKL, and then, together with the mkl&hellip;'], 'answer_start': [1620]}
Target size (torch.Size([32])) must be the same as input size (torch.Size([32, 24]))
data_transforms = { 'train': transforms.Compose([ transforms.ToPILImage(), transforms.RandomRotation(15), transforms.RandomHorizontalFlip(), transforms.ToTensor(), ]), 'valid': transforms.Compose([ transforms.ToPILImage(), transfo…
0
2019-09-06T16:24:39.059Z
Your target should contain class indices in the range [0, nb_classes-1]. In your case this would be [0, 23]. To debug this issue, you could add a print statement and check, which target batch fails this assumption.
1
2019-09-06T17:13:07.502Z
https://discuss.pytorch.org/t/target-size-torch-size-32-must-be-the-same-as-input-size-torch-size-32-24/55376/4
After so many tries, the following has worked for me. I had to set ninja off. Ninja is in order to speed up the process, too bad that I cannot use it. Without ninja, it ran through the whole night for about 9.5 hours. I also needed to download the source code of MKL, and then, together with the mkl&hellip; Your target should contain class indices in the range [0, nb_classes-1]. In your case this would be [0, 23]. To debug this issue, you could add a print statement and check, which target batch fails this assumption. I had a look at the github issue and the relevant files. It is a tricky issue and it is caused by the line that updates the rng state (unnecessarily?): <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437" rel="nofollow noopener">https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437</a> I see there are 2 workarounds. The Dataloader code can be fi&hellip;
1,119
{'text': ['Your target should contain class indices in the range [0, nb_classes-1].\n\nIn your case this would be [0, 23].\n\nTo debug this issue, you could add a print statement and check, which target batch fails this assumption.'], 'answer_start': [1119]}
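The row above pairs a shape-mismatch error with the advice to use class indices as targets. A minimal sketch of that fix, with shapes taken from the error message and everything else hypothetical; the original error is typically raised by BCE-style losses, which expect targets shaped like the input:

```python
import torch
import torch.nn as nn

# Hypothetical shapes matching the error message: batch of 32, 24 classes.
logits = torch.randn(32, 24)           # model output: [batch_size, nb_classes]
targets = torch.randint(0, 24, (32,))  # class indices in [0, nb_classes - 1]

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)      # works: target holds indices, not one-hot vectors

# Quick sanity check in the spirit of the answer: find the batch that breaks the assumption.
assert targets.min() >= 0 and targets.max() <= 23, "target outside [0, 23]"
```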
[DataLoader Problem] Problem arises when shuffle = True
I have training , validation and test dataset(NLP problem , So I used LSTM , GRU) . The model contains batch norm layer (I think this is the reason for discrepancy I am observing). I don’t have true labels for test dataset. This was my training procedure before : Train on training dataset for 5 ep&hellip;
1
2019-05-19T14:18:27.105Z
I had a look at the github issue and the relevant files. It is a tricky issue and it is caused by the line that updates the rng state (unnecessarily?): <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437" rel="nofollow noopener">https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437</a> I see there are 2 workarounds. The Dataloader code can be fi&hellip;
1
2019-05-21T08:55:32.554Z
https://discuss.pytorch.org/t/dataloader-problem-problem-arises-when-shuffle-true/45631/23
After so many tries, the following has worked for me. I had to set ninja off. Ninja is in order to speed up the process, too bad that I cannot use it. Without ninja, it ran through the whole night for about 9.5 hours. I also needed to download the source code of MKL, and then, together with the mkl&hellip; Your target should contain class indices in the range [0, nb_classes-1]. In your case this would be [0, 23]. To debug this issue, you could add a print statement and check, which target batch fails this assumption. I had a look at the github issue and the relevant files. It is a tricky issue and it is caused by the line that updates the rng state (unnecessarily?): <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437" rel="nofollow noopener">https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437</a> I see there are 2 workarounds. The Dataloader code can be fi&hellip;
526
{'text': ['I had a look at the github issue and the relevant files.\n\nIt is a tricky issue and it is caused by the line that updates the rng state (unnecessarily?): <a href="https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437" rel="nofollow noopener">https://github.com/pytorch/pytorch/blob/master/torch/utils/data/dataloader.py#L437</a>\n\nI see there are 2 workarounds.\n\nThe Dataloader code can be fi&hellip;'], 'answer_start': [526]}
Pytorch 1.2 Windows
Anyone know where i can get the PyTorch v1.2? Conda has 1.1 for Windows but 1.2 for Mac, Linux
3
2019-08-09T09:39:28.227Z
Pytorch just got updated for windows on conda. [image]
0
2019-08-10T16:00:34.202Z
https://discuss.pytorch.org/t/pytorch-1-2-windows/52959/9
Pytorch just got updated for windows on conda. [image] You could use the staticmethod get_params to apply the same “random” transformation via: img = transforms.ToPILImage()(torch.randn(3, 224, 224)) color_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1) transform = transforms.ColorJitter.get_params( color_jit&hellip; Thank you very much for this great code to reproduce this issue! Indeed the memory is growing in each epoch. After looking into the code, I think the reason is that you might track the computation graph in self.running_mean and self.running_covar unintentionally. This might be the case if you ass&hellip;
1,906
{'text': ['Pytorch just got updated for windows on conda.\n\n[image]'], 'answer_start': [1906]}
Pytorch color jitter
From the documentation: “brightness_factor is chosen uniformly from [max(0, 1 - brightness), 1 + brightness]” brightness by default is set to 0. This means that the brightness factor is chosen uniformly from [1, 1] meaning that brightness factor=1. The other parameters (contrast, saturation, hue) a&hellip;
0
2020-05-25T05:24:09.606Z
You could use the staticmethod get_params to apply the same “random” transformation via: img = transforms.ToPILImage()(torch.randn(3, 224, 224)) color_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1) transform = transforms.ColorJitter.get_params( color_jit…
2
2020-05-25T05:51:46.473Z
https://discuss.pytorch.org/t/pytorch-color-jitter/82769/2
Pytorch just got updated for windows on conda. [image] You could use the staticmethod get_params to apply the same “random” transformation via: img = transforms.ToPILImage()(torch.randn(3, 224, 224)) color_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1) transform = transforms.ColorJitter.get_params( color_jit&hellip; Thank you very much for this great code to reproduce this issue! Indeed the memory is growing in each epoch. After looking into the code, I think the reason is that you might track the computation graph in self.running_mean and self.running_covar unintentionally. This might be the case if you ass&hellip;
1,009
{'text': ['You could use the staticmethod get_params to apply the same “random” transformation via:\n\nimg = transforms.ToPILImage()(torch.randn(3, 224, 224))\n\ncolor_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1)\n\ntransform = transforms.ColorJitter.get_params(\n\ncolor_jit…'], 'answer_start': [1009]}
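A sketch of the approach in the answer above, completing its truncated snippet. One caveat: in torchvision releases contemporary with the answer, ColorJitter.get_params returned a ready-to-apply transform, while current releases return the sampled parameters instead, so this assumes the older behaviour:

```python
import torch
from torchvision import transforms

img = transforms.ToPILImage()(torch.rand(3, 224, 224))

color_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1)

# get_params draws the jitter factors once, so the returned transform applies the
# *same* "random" jitter every time it is called (older torchvision behaviour).
transform = transforms.ColorJitter.get_params(
    color_jitter.brightness, color_jitter.contrast,
    color_jitter.saturation, color_jitter.hue)

out1 = transform(img)   # same jitter parameters
out2 = transform(img)   # applied again with identical factors
```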
How does BatchNorm keeps track of running_mean?
Hi all, I try to implement a custom version of batch norm from scratch (to use a complex-valued network, I already have the other components, just need the batch norm). For now, I want to write it in pure python using Module + Function: <a href="https://pytorch.org/docs/stable/notes/extending.html" class="onebox" target="_blank" rel="nofollow noopener">https://pytorch.org/docs/stable/notes/extending.html</a> To hel&hellip;
0
2019-03-17T16:29:54.268Z
Thank you very much for this great code to reproduce this issue! Indeed the memory is growing in each epoch. After looking into the code, I think the reason is that you might track the computation graph in self.running_mean and self.running_covar unintentionally. This might be the case if you ass&hellip;
2
2019-04-09T00:05:32.705Z
https://discuss.pytorch.org/t/how-does-batchnorm-keeps-track-of-running-mean/40084/16
Pytorch just got updated for windows on conda. [image] You could use the staticmethod get_params to apply the same “random” transformation via: img = transforms.ToPILImage()(torch.randn(3, 224, 224)) color_jitter = transforms.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1) transform = transforms.ColorJitter.get_params( color_jit&hellip; Thank you very much for this great code to reproduce this issue! Indeed the memory is growing in each epoch. After looking into the code, I think the reason is that you might track the computation graph in self.running_mean and self.running_covar unintentionally. This might be the case if you ass&hellip;
363
{'text': ['Thank you very much for this great code to reproduce this issue!\n\nIndeed the memory is growing in each epoch.\n\nAfter looking into the code, I think the reason is that you might track the computation graph in self.running_mean and self.running_covar unintentionally.\n\nThis might be the case if you ass…'], 'answer_start': [363]}
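The answer above points at running statistics that unintentionally keep the autograd graph alive. A hedged sketch of the kind of fix it implies (not the poster's actual module): update the buffers under torch.no_grad() so no graph is stored between iterations:

```python
import torch
import torch.nn as nn

class MyBatchNorm1d(nn.Module):
    """Simplified custom batch norm; only the running-stat update matters here."""
    def __init__(self, num_features, momentum=0.1, eps=1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.register_buffer('running_mean', torch.zeros(num_features))
        self.register_buffer('running_var', torch.ones(num_features))

    def forward(self, x):
        if self.training:
            mean = x.mean(dim=0)
            var = x.var(dim=0, unbiased=False)
            # Update the buffers without autograd history, otherwise every batch's
            # graph stays reachable through running_mean/running_var and memory grows.
            with torch.no_grad():
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * mean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / torch.sqrt(var + self.eps)

bn = MyBatchNorm1d(4)
out = bn(torch.randn(8, 4))
```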
PyTorch compiled from source for Windows is failing when importing torch
Hello, I am getting this error when compiling PyTorch from source for Windows 10. Since my GPU (GTX Titan Black) has compute capability version 3.5 and PyTorch current binaries are compatible with >= 3.7 I guess I have to compile it from source for getting support for my device. I will try to desc…
0
2020-04-06T14:55:15.633Z
I guess the cause is somehow related to OpenMP. Would you please add MKL to build to verify my hypothesis? As for the installation script, you can refer to https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73. Update: A fast way to do that verification is to run the fol…
0
2020-04-15T14:21:23.568Z
https://discuss.pytorch.org/t/pytorch-compiled-from-source-for-windows-is-failing-when-importing-torch/75567/12
I guess the cause is somehow related to OpenMP. Would you please add MKL to build to verify my hypothesis? As for the installation script, you can refer to <a href="https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73" rel="nofollow noopener">https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73</a>. Update: A fast way to do that verification is to run the fol&hellip; You could create a method to load your weights. I created a small example for you. You could definitely write the code in a more compact way, so this should be a starter only. :wink: class SSD(nn.Module): def __init__(self, init_weights=True): super(SSD, self).__init__() # ==&hellip; Thanks for the debugging! <a href="https://discuss.pytorch.org/t/update-only-a-middle-layer-of-a-neural-network/35302/4?u=ptrblck">This post</a> might explain the benefits you are seeing.
1,342
{'text': ['I guess the cause is somehow related to OpenMP. Would you please add MKL to build to verify my hypothesis? As for the installation script, you can refer to <a href="https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73" rel="nofollow noopener">https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73</a>.\n\nUpdate:\n\nA fast way to do that verification is to run the fol&hellip;'], 'answer_start': [1342]}
Initialization of network using specific (pre-trained) parameters of VGG16
Hi all, I am new to PyTorch (have some good experience in Theano/Lasagne), and I am trying to build an SSD-like architecture. I define the following class (apologies for its length): class SSD(nn.Module): def __init__(self, init_weights=True): super(SSD, self).__init__() # =&hellip;
2
2018-06-01T13:09:39.400Z
You could create a method to load your weights. I created a small example for you. You could definitely write the code in a more compact way, so this should be a starter only. :wink: class SSD(nn.Module): def __init__(self, init_weights=True): super(SSD, self).__init__() # ==…
2
2018-06-01T14:05:12.006Z
https://discuss.pytorch.org/t/initialization-of-network-using-specific-pre-trained-parameters-of-vgg16/19039/2
I guess the cause is somehow related to OpenMP. Would you please add MKL to build to verify my hypothesis? As for the installation script, you can refer to <a href="https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73" rel="nofollow noopener">https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73</a>. Update: A fast way to do that verification is to run the fol&hellip; You could create a method to load your weights. I created a small example for you. You could definitely write the code in a more compact way, so this should be a starter only. :wink: class SSD(nn.Module): def __init__(self, init_weights=True): super(SSD, self).__init__() # ==&hellip; Thanks for the debugging! <a href="https://discuss.pytorch.org/t/update-only-a-middle-layer-of-a-neural-network/35302/4?u=ptrblck">This post</a> might explain the benefits you are seeing.
1,099
{'text': ['You could create a method to load your weights.\n\nI created a small example for you.\n\nYou could definitely write the code in a more compact way, so this should be a starter only. :wink:\n\nclass SSD(nn.Module):\n\ndef __init__(self, init_weights=True):\n\nsuper(SSD, self).__init__()\n\n# ==&hellip;'], 'answer_start': [1099]}
How to turn off gradient during GAN training
I am going through the DCGAN tutorials <a href="https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html" rel="nofollow noopener">tutorials</a>. One question I have is how do you turn off the gradient history tracking for discriminator when you are training the generator. In the tutorial, it is not turned off as shown below. ... # this part trains generator netG.zero_grad() &hellip;
2
2019-03-14T20:02:08.802Z
Thanks for the debugging! <a href="https://discuss.pytorch.org/t/update-only-a-middle-layer-of-a-neural-network/35302/4?u=ptrblck">This post</a> might explain the benefits you are seeing.
0
2019-03-17T15:24:08.134Z
https://discuss.pytorch.org/t/how-to-turn-off-gradient-during-gan-training/39886/9
I guess the cause is somehow related to OpenMP. Would you please add MKL to build to verify my hypothesis? As for the installation script, you can refer to <a href="https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73" rel="nofollow noopener">https://github.com/pytorch/builder/blob/master/windows/build_pytorch.bat#L68-L73</a>. Update: A fast way to do that verification is to run the fol&hellip; You could create a method to load your weights. I created a small example for you. You could definitely write the code in a more compact way, so this should be a starter only. :wink: class SSD(nn.Module): def __init__(self, init_weights=True): super(SSD, self).__init__() # ==&hellip; Thanks for the debugging! <a href="https://discuss.pytorch.org/t/update-only-a-middle-layer-of-a-neural-network/35302/4?u=ptrblck">This post</a> might explain the benefits you are seeing.
719
{'text': ['Thanks for the debugging!\n\n<a href="https://discuss.pytorch.org/t/update-only-a-middle-layer-of-a-neural-network/35302/4?u=ptrblck">This post</a> might explain the benefits you are seeing.'], 'answer_start': [719]}
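For the GAN question above, a self-contained sketch of the usual pattern for freezing the discriminator while the generator is updated; netG, netD, the loss and the optimizer are small stand-ins, not the tutorial's actual models:

```python
import torch
import torch.nn as nn

netG = nn.Linear(16, 8)                # stand-in generator
netD = nn.Sequential(nn.Linear(8, 1))  # stand-in discriminator
criterion = nn.BCEWithLogitsLoss()
optimizerG = torch.optim.Adam(netG.parameters(), lr=2e-4)

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

noise = torch.randn(4, 16)
real_labels = torch.ones(4, 1)

set_requires_grad(netD, False)         # freeze D while updating G
netG.zero_grad()
g_loss = criterion(netD(netG(noise)), real_labels)
g_loss.backward()                      # grads flow back through D into G, but D's params get none
optimizerG.step()
set_requires_grad(netD, True)          # unfreeze before D's own update
```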
Error with LSTM when switching from 0.4 to 1.0 (invalid combination of arguments)
My code was working with 0.4, but later when switching to 1.0, the following bug occurs. Is there any quick fix here? TypeError: lstm() received an invalid combination of arguments - got (Tensor, Tensor, tuple, list, bool, int, float, bool, int), but expected one of: (Tensor data, Tensor batch_si&hellip;
1
2019-01-26T04:10:17.840Z
Hi @AlexisW! I faced a similar kind of problem. In my case I had to go through my self-implemented initialization function, that essentially assigns random values to several hyperparameters. The key culprit was in value types. I had to transform all instances of numpy variables to original Python t…
6
2019-02-11T10:47:28.150Z
https://discuss.pytorch.org/t/error-with-lstm-when-switching-from-0-4-to-1-0-invalid-combination-of-arguments/35629/5
Hi <a class="mention" href="/u/alexisw">@AlexisW</a>! I faced a similar kind of problem. In my case I had to go through my self-implemented initialization function, that essentially assigns random values to several hyperparameters. The key culprit was in value types. I had to transform all instances of numpy variables to original Python t&hellip; As the error message suggests, you would have to push the tensor to the CPU first before converting it to a numpy array via tensor.cpu(). In particular np.array(targets.argmax(1)) seems to raise the error to use: targets = targets.argmax(1).cpu().numpy() instead. PS: you can post code snippets b&hellip; Mac binaries do not ship with CUDA support. Moreover, from <a href="https://support.apple.com/kb/sp623?locale=en_US">this page</a>, it seems your machine don’t have a CUDA GPU anyways. However, your code calls model.cuda(). Hence the error.
1,814
{'text': ['Hi <a class="mention" href="/u/alexisw">@AlexisW</a>!\n\nI faced a similar kind of problem. In my case I had to go through my self-implemented initialization function, that essentially assigns random values to several hyperparameters. The key culprit was in value types. I had to transform all instances of numpy variables to original Python t&hellip;'], 'answer_start': [1814]}
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first
for epoch in range(start_epoch, start_epoch + epochs): print('\n\n\nEpoch: {}\n<Train>'.format(epoch)) net.train(True) loss = 0 learning_rate = learning_rate * (0.5 ** (epoch // 4)) for param_group in optimizer.param_groups: param_group["learning_rate"] = learning_rate …
0
2021-07-13T05:34:42.386Z
As the error message suggests, you would have to push the tensor to the CPU first before converting it to a numpy array via tensor.cpu(). In particular np.array(targets.argmax(1)) seems to raise the error to use: targets = targets.argmax(1).cpu().numpy() instead. PS: you can post code snippets b…
3
2021-07-13T08:29:31.637Z
https://discuss.pytorch.org/t/typeerror-cant-convert-cuda-0-device-type-tensor-to-numpy-use-tensor-cpu-to-copy-the-tensor-to-host-memory-first/126585/2
Hi <a class="mention" href="/u/alexisw">@AlexisW</a>! I faced a similar kind of problem. In my case I had to go through my self-implemented initialization function, that essentially assigns random values to several hyperparameters. The key culprit was in value types. I had to transform all instances of numpy variables to original Python t&hellip; As the error message suggests, you would have to push the tensor to the CPU first before converting it to a numpy array via tensor.cpu(). In particular np.array(targets.argmax(1)) seems to raise the error to use: targets = targets.argmax(1).cpu().numpy() instead. PS: you can post code snippets b&hellip; Mac binaries do not ship with CUDA support. Moreover, from <a href="https://support.apple.com/kb/sp623?locale=en_US">this page</a>, it seems your machine don’t have a CUDA GPU anyways. However, your code calls model.cuda(). Hence the error.
1,257
{'text': ['As the error message suggests, you would have to push the tensor to the CPU first before converting it to a numpy array via tensor.cpu().\n\nIn particular np.array(targets.argmax(1)) seems to raise the error to use:\n\ntargets = targets.argmax(1).cpu().numpy()\n\ninstead.\n\nPS: you can post code snippets b…'], 'answer_start': [1257]}
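A runnable sketch of the conversion recommended in the answer above; the tensor here is a stand-in for the poster's targets:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
targets = torch.randint(0, 10, (32, 10), device=device).float()

# The question's np.array(targets.argmax(1)) fails for CUDA tensors;
# move the result to host memory first, then convert:
target_indices = targets.argmax(1).cpu().numpy()
print(type(target_indices), target_indices.shape)   # <class 'numpy.ndarray'> (32,)
```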
Cannot initialize CUDA without ATen_cuda library
I tried to run my python code, however, I got this error: /Users/Apple/anaconda3/bin/python /Users/Apple/Downloads/linear-regression.py Traceback (most recent call last): File "/Users/Apple/Downloads/linear-regression.py", line 39, in <module> model.cuda() File "/Users/Apple/anaconda3/lib/p…
0
2018-09-08T12:54:10.916Z
Mac binaries do not ship with CUDA support. Moreover, from <a href="https://support.apple.com/kb/sp623?locale=en_US">this page</a>, it seems your machine don’t have a CUDA GPU anyways. However, your code calls model.cuda(). Hence the error.
0
2018-09-08T22:25:52.499Z
https://discuss.pytorch.org/t/cannot-initialize-cuda-without-aten-cuda-library/24745/7
Hi <a class="mention" href="/u/alexisw">@AlexisW</a>! I faced a similar kind of problem. In my case I had to go through my self-implemented initialization function, that essentially assigns random values to several hyperparameters. The key culprit was in value types. I had to transform all instances of numpy variables to original Python t&hellip; As the error message suggests, you would have to push the tensor to the CPU first before converting it to a numpy array via tensor.cpu(). In particular np.array(targets.argmax(1)) seems to raise the error to use: targets = targets.argmax(1).cpu().numpy() instead. PS: you can post code snippets b&hellip; Mac binaries do not ship with CUDA support. Moreover, from <a href="https://support.apple.com/kb/sp623?locale=en_US">this page</a>, it seems your machine don’t have a CUDA GPU anyways. However, your code calls model.cuda(). Hence the error.
659
{'text': ['Mac binaries do not ship with CUDA support. Moreover, from <a href="https://support.apple.com/kb/sp623?locale=en_US">this page</a>, it seems your machine don’t have a CUDA GPU anyways. However, your code calls model.cuda(). Hence the error.'], 'answer_start': [659]}
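The answer above boils down to: do not call model.cuda() when no CUDA device exists. A small sketch of the usual device-agnostic pattern:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# Only move the model to the GPU when a CUDA device is actually available
# (Mac binaries ship without CUDA support, so model.cuda() raises there).
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)
```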
How to use layer norm after con 1d layer?
Cant get what to pass as argument
1
2019-12-29T14:24:21.721Z
I think doing x = torch.randn(1, 3, 6) # batch size 1, 3 channels, 6 length of sequence a = nn.Conv1d(3, 6, 3) # in channels 3, out channels 6, kernel size 3 gn = nn.GroupNorm(1, 6) gn(a(x)) tensor([[[-0.1459, 0.5860, 0.1771, 1.1413], [-0.8613, 2.7552, -1.0135, 0.8898], [-0.1119, -0.1656, -…
1
2019-12-30T14:51:16.828Z
https://discuss.pytorch.org/t/how-to-use-layer-norm-after-con-1d-layer/65284/8
I think doing x = torch.randn(1, 3, 6) # batch size 1, 3 channels, 6 length of sequence a = nn.Conv1d(3, 6, 3) # in channels 3, out channels 6, kernel size 3 gn = nn.GroupNorm(1, 6) gn(a(x)) tensor([[[-0.1459, 0.5860, 0.1771, 1.1413], [-0.8613, 2.7552, -1.0135, 0.8898], [-0.1119, -0.1656, -&hellip; No it is because your batch_size is only 1. So when you define your hidden state change the input parameter to 1. Could you print the shape out logps[0]? It should be [batch_size, nb_classes]. I also just realized, that you are assigning your Sequential classifier module to model.classifier. If you are using inception_v3, you should use model.fc instead. Here is a minimal code snippet which should work: mod&hellip;
1,798
{'text': ['I think doing\n\nx = torch.randn(1, 3, 6) # batch size 1, 3 channels, 6 length of sequence\n\na = nn.Conv1d(3, 6, 3) # in channels 3, out channels 6, kernel size 3\n\ngn = nn.GroupNorm(1, 6)\n\ngn(a(x))\n\ntensor([[[-0.1459, 0.5860, 0.1771, 1.1413],\n\n[-0.8613, 2.7552, -1.0135, 0.8898],\n\n[-0.1119, -0.1656, -…'], 'answer_start': [1798]}
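The truncated answer above can be completed into a runnable snippet; nn.GroupNorm with a single group over all channels behaves like a layer norm applied after the Conv1d:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 6)     # batch size 1, 3 channels, sequence length 6
conv = nn.Conv1d(3, 6, 3)    # in_channels 3, out_channels 6, kernel size 3
gn = nn.GroupNorm(1, 6)      # a single group over all 6 channels acts like layer norm

out = gn(conv(x))
print(out.shape)             # torch.Size([1, 6, 4])
print(out.mean(), out.var()) # roughly 0 and 1 over the normalized dimensions
```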
RuntimeError: input must have 3 dimensions, got 2 LSTM
I am having issues using an LSTM. I get the error RuntimeError: input must have 3 dimensions, got 2. I have looked online, but I am unable to get it to work. I am fairly new to PyTorch, so any help is appreciated. Here is my code: class TeacherUpdated(nn.Module): def __init__(self): # p&hellip;
0
2021-03-07T16:57:07.212Z
No it is because your batch_size is only 1. So when you define your hidden state change the input parameter to 1.
0
2021-03-09T15:49:20.311Z
https://discuss.pytorch.org/t/runtimeerror-input-must-have-3-dimensions-got-2-lstm/113972/20
I think doing x = torch.randn(1, 3, 6) # batch size 1, 3 channels, 6 length of sequence a = nn.Conv1d(3, 6, 3) # in channels 3, out channels 6, kernel size 3 gn = nn.GroupNorm(1, 6) gn(a(x)) tensor([[[-0.1459, 0.5860, 0.1771, 1.1413], [-0.8613, 2.7552, -1.0135, 0.8898], [-0.1119, -0.1656, -&hellip; No it is because your batch_size is only 1. So when you define your hidden state change the input parameter to 1. Could you print the shape out logps[0]? It should be [batch_size, nb_classes]. I also just realized, that you are assigning your Sequential classifier module to model.classifier. If you are using inception_v3, you should use model.fc instead. Here is a minimal code snippet which should work: mod&hellip;
1,211
{'text': ['No it is because your batch_size is only 1. So when you define your hidden state change the input parameter to 1.'], 'answer_start': [1211]}
AttributeError: 'tuple' object has no attribute 'dim' error Transfer learning inception_v3
I’m trying to classify my images using transfer learning with inception_v3 and having an error RuntimeError: cuDNN error: CUDNN_STATUS_BAD_PARAM What would be the reason of this error ? My transform transform = transforms.Compose([ transforms.CenterCrop(1000), transforms.Resize((299,299)), &hellip;
0
2019-02-04T04:19:10.422Z
Could you print the shape out logps[0]? It should be [batch_size, nb_classes]. I also just realized, that you are assigning your Sequential classifier module to model.classifier. If you are using inception_v3, you should use model.fc instead. Here is a minimal code snippet which should work: mod…
1
2019-02-04T23:10:10.706Z
https://discuss.pytorch.org/t/attributeerror-tuple-object-has-no-attribute-dim-error-transfer-learning-inception-v3/36334/14
I think doing x = torch.randn(1, 3, 6) # batch size 1, 3 channels, 6 length of sequence a = nn.Conv1d(3, 6, 3) # in channels 3, out channels 6, kernel size 3 gn = nn.GroupNorm(1, 6) gn(a(x)) tensor([[[-0.1459, 0.5860, 0.1771, 1.1413], [-0.8613, 2.7552, -1.0135, 0.8898], [-0.1119, -0.1656, -&hellip; No it is because your batch_size is only 1. So when you define your hidden state change the input parameter to 1. Could you print the shape out logps[0]? It should be [batch_size, nb_classes]. I also just realized, that you are assigning your Sequential classifier module to model.classifier. If you are using inception_v3, you should use model.fc instead. Here is a minimal code snippet which should work: mod&hellip;
426
{'text': ['Could you print the shape out logps[0]? It should be [batch_size, nb_classes].\n\nI also just realized, that you are assigning your Sequential classifier module to model.classifier.\n\nIf you are using inception_v3, you should use model.fc instead.\n\nHere is a minimal code snippet which should work:\n\nmod&hellip;'], 'answer_start': [426]}
Image Folder with no subfolders
Hi, I am trying to load images from a folder with no subfolders and having the error below. When I try to read an image from the folder I can load .jpg file with no error. My file format meets the pytorch image format criteria. Also, transforms.Compose just works fine (this must mean nothing wrong &hellip;
0
2018-10-24T10:32:34.585Z
I decided to come up with a model have classes 0, 1 or 2 (2 for 2 or more people) and moved my images to data folder using this code.It took seconds i = 0 import shutil from pathlib import Path for i in range(len(filenames)): if train_test[i] != '0': my_file = Path('images/'+ filenames…
0
2018-11-11T00:34:50.416Z
https://discuss.pytorch.org/t/image-folder-with-no-subfolders/27930/16
I decided to come up with a model have classes 0, 1 or 2 (2 for 2 or more people) and moved my images to data folder using this code.It took seconds i = 0 import shutil from pathlib import Path for i in range(len(filenames)): if train_test[i] != '0': my_file = Path('images/'+ filenames… class View(nn.Module): def __init__(self, shape): super().__init__() self.shape = shape def __repr__(self): return f'View{self.shape}' def forward(self, input): ''' Reshapes the input according to the shape saved in the view data structure. … If you just have conv/linear layers, you could use this – https://github.com/cybertronai/autograd-hacks#per-example-gradients
1,468
{'text': ['I decided to come up with a model have classes 0, 1 or 2 (2 for 2 or more people) and moved my images to data folder using this code.It took seconds\n\ni = 0\n\nimport shutil\n\nfrom pathlib import Path\n\nfor i in range(len(filenames)):\n\nif train_test[i] != \'0\':\n\nmy_file = Path(\'images/\'+ filenames…'], 'answer_start': [1468]}
How to build a view layer in Pytorch for Sequential Models?
How to build a view layer in Pytorch for Sequential Models? Is this ok: class View(nn.Module): def forward(self, input, shape): return input.view(*shape) I tried it based on the flatten layer but I couldn’t even make the flatten layer work: import torch import torch.nn as nn ## Q: w…
1
2019-08-21T16:57:52.849Z
class View(nn.Module): def __init__(self, shape): super().__init__() self.shape = shape def __repr__(self): return f'View{self.shape}' def forward(self, input): ''' Reshapes the input according to the shape saved in the view data structure. …
1
2019-09-18T16:39:05.802Z
https://discuss.pytorch.org/t/how-to-build-a-view-layer-in-pytorch-for-sequential-models/53958/12
I decided to come up with a model have classes 0, 1 or 2 (2 for 2 or more people) and moved my images to data folder using this code.It took seconds i = 0 import shutil from pathlib import Path for i in range(len(filenames)): if train_test[i] != '0': my_file = Path('images/'+ filenames… class View(nn.Module): def __init__(self, shape): super().__init__() self.shape = shape def __repr__(self): return f'View{self.shape}' def forward(self, input): ''' Reshapes the input according to the shape saved in the view data structure. … If you just have conv/linear layers, you could use this – https://github.com/cybertronai/autograd-hacks#per-example-gradients
1,051
{'text': ['class View(nn.Module):\n\ndef __init__(self, shape):\n\nsuper().__init__()\n\nself.shape = shape\n\ndef __repr__(self):\n\nreturn f\'View{self.shape}\'\n\ndef forward(self, input):\n\n\'\'\'\n\nReshapes the input according to the shape saved in the view data structure.\n\n…'], 'answer_start': [1051]}
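A plausible completion of the truncated View module quoted above (the original forward body is cut off, so the reshape logic here is an assumption), together with a small usage example inside nn.Sequential:

```python
import torch
import torch.nn as nn

class View(nn.Module):
    """Reshape layer usable inside nn.Sequential."""
    def __init__(self, shape):
        super().__init__()
        self.shape = shape

    def __repr__(self):
        return f'View{self.shape}'

    def forward(self, input):
        # Reshape according to the shape stored at construction time.
        return input.view(*self.shape)

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    View((-1, 8 * 28 * 28)),      # flatten before the linear layer
    nn.Linear(8 * 28 * 28, 10),
)
print(model(torch.randn(2, 1, 28, 28)).shape)   # torch.Size([2, 10])
```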
How to efficiently compute gradient for each training sample?
Hi folks, There is a problem that has bothered me for quite a long time. Assume we are minimizing a loss function [WechatIMG4] parameterized by [WechatIMG3], on samples [WechatIMG5] using SGD, where M is the mini-batch size. Since the PyTorch autograd can only be implicitly created for scalar outpu&hellip;
2
2019-11-04T23:48:01.962Z
If you just have conv/linear layers, you could use this – <a href="https://github.com/cybertronai/autograd-hacks#per-example-gradients" rel="nofollow noopener">https://github.com/cybertronai/autograd-hacks#per-example-gradients</a>
6
2019-11-05T07:10:12.894Z
https://discuss.pytorch.org/t/how-to-efficiently-compute-gradient-for-each-training-sample/60001/4
I decided to come up with a model have classes 0, 1 or 2 (2 for 2 or more people) and moved my images to data folder using this code.It took seconds i = 0 import shutil from pathlib import Path for i in range(len(filenames)): if train_test[i] != '0': my_file = Path('images/'+ filenames… class View(nn.Module): def __init__(self, shape): super().__init__() self.shape = shape def __repr__(self): return f'View{self.shape}' def forward(self, input): ''' Reshapes the input according to the shape saved in the view data structure. … If you just have conv/linear layers, you could use this – https://github.com/cybertronai/autograd-hacks#per-example-gradients
596
{'text': ['If you just have conv/linear layers, you could use this – <a href="https://github.com/cybertronai/autograd-hacks#per-example-gradients" rel="nofollow noopener">https://github.com/cybertronai/autograd-hacks#per-example-gradients</a>'], 'answer_start': [596]}
Error in torch.nn.DataParallel
Hi. I would like to use “DataParallel” in DNN training in Pytorch but get some errors. Before I use “DataParallel”, the code is; <Code 1> for epoch in range(epochs): train_loss = 0.0 val_loss = 0.0 train_loader2 = MakeDataset(file_x_train, file_y_mask_train, tmpbatch_size, shuffl…
1
2019-12-16T07:02:08.763Z
So are you multiplying the batch size by the number of GPUs (9)? nn.DataParallel will chunk the batch in dim0 and send each piece to a GPU. Since you get [10, 396] inside the forward method for a single GPU as well as for multiple GPUs using nn.DataParallel, your provided batch should have the sha…
1
2019-12-16T08:07:58.883Z
https://discuss.pytorch.org/t/error-in-torch-nn-dataparallel/64164/9
So are you multiplying the batch size by the number of GPUs (9)? nn.DataParallel will chunk the batch in dim0 and send each piece to a GPU. Since you get [10, 396] inside the forward method for a single GPU as well as for multiple GPUs using nn.DataParallel, your provided batch should have the sha&hellip; The <a href="https://en.wikipedia.org/wiki/Shared_memory">Wikipedia article</a> explains shared memory maybe a bit easier to understand. It’s basically a memory pool, which can be used by multiple processes to exchange information and data. h and c are not learned parameters. Check this example please: <a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging</a>
1,654
{'text': ['So are you multiplying the batch size by the number of GPUs (9)?\n\nnn.DataParallel will chunk the batch in dim0 and send each piece to a GPU.\n\nSince you get [10, 396] inside the forward method for a single GPU as well as for multiple GPUs using nn.DataParallel, your provided batch should have the sha&hellip;'], 'answer_start': [1654]}
What is the shared memory?
hi I am trying to train a model using multiprocessing. In the example below (<a href="https://pytorch.org/docs/1.6.0/notes/multiprocessing.html?highlight=multiprocessing" class="inline-onebox" rel="noopener nofollow ugc">Multiprocessing best practices — PyTorch 1.6.0 documentation</a>), model.share_memory() is used. import torch.multiprocessing as mp from model import MyModel def train(model): # Construct data_loader, optimizer, etc. &hellip;
1
2021-02-18T03:25:36.191Z
The <a href="https://en.wikipedia.org/wiki/Shared_memory">Wikipedia article</a> explains shared memory maybe a bit easier to understand. It’s basically a memory pool, which can be used by multiple processes to exchange information and data.
0
2021-02-18T06:04:14.170Z
https://discuss.pytorch.org/t/what-is-the-shared-memory/112212/2
So are you multiplying the batch size by the number of GPUs (9)? nn.DataParallel will chunk the batch in dim0 and send each piece to a GPU. Since you get [10, 396] inside the forward method for a single GPU as well as for multiple GPUs using nn.DataParallel, your provided batch should have the sha&hellip; The <a href="https://en.wikipedia.org/wiki/Shared_memory">Wikipedia article</a> explains shared memory maybe a bit easier to understand. It’s basically a memory pool, which can be used by multiple processes to exchange information and data. h and c are not learned parameters. Check this example please: <a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging</a>
1,136
{'text': ['The <a href="https://en.wikipedia.org/wiki/Shared_memory">Wikipedia article</a> explains shared memory maybe a bit easier to understand.\n\nIt’s basically a memory pool, which can be used by multiple processes to exchange information and data.'], 'answer_start': [1136]}
Dropout for LSTM state transitions
Hi, I was experimenting with LSTMs and noted that the dropout was applied at the output of the LSTMs like in the figure on the left below. I was wondering if it is possible to apply the dropout at the state transitions instead, like on the right. [image D.png: https://discuss.pytorch.org/uploads/default/original/2X/2/21e00b5df67dadcc75d1105b6bbff2cc9a279ea5.png]
1
2018-04-27T12:59:07.149Z
h and c are not learned parameters. Check this example please: <a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging</a>
1
2018-05-02T22:02:25.492Z
https://discuss.pytorch.org/t/dropout-for-lstm-state-transitions/17112/14
So are you multiplying the batch size by the number of GPUs (9)? nn.DataParallel will chunk the batch in dim0 and send each piece to a GPU. Since you get [10, 396] inside the forward method for a single GPU as well as for multiple GPUs using nn.DataParallel, your provided batch should have the sha&hellip; The <a href="https://en.wikipedia.org/wiki/Shared_memory">Wikipedia article</a> explains shared memory maybe a bit easier to understand. It’s basically a memory pool, which can be used by multiple processes to exchange information and data. h and c are not learned parameters. Check this example please: <a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging</a>
551
{'text': ['h and c are not learned parameters. Check this example please:\n\n<a href="http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging" class="onebox" target="_blank" rel="nofollow noopener">http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#example-an-lstm-for-part-of-speech-tagging</a>'], 'answer_start': [551]}
CPU RAM usage increasing for every epoch
Hello, I’m running into troubles while training a CAE(Convolutional Auto Encoder) model. I defined my own dataset class as follows: def make_dataset(dir, class_to_idx, extensions): label_list = [] input_list = [] dir = os.path.expanduser(dir) for target in sorted(class_to_idx.keys(&hellip;
0
2018-09-04T12:44:21.221Z
@kunasiramesh, @Gkv The memory issue might be related to the training procedure or another part of the code. Could you post the code so that we can have a look? Usually the computation graph is unintentionally stored somewhere, e.g. by using losses += loss instead of losses += loss.item().
11
2018-09-12T09:55:12.270Z
https://discuss.pytorch.org/t/cpu-ram-usage-increasing-for-every-epoch/24475/6
<a class="mention" href="/u/kunasiramesh">@kunasiramesh</a>, <a class="mention" href="/u/gkv">@Gkv</a> The memory issue might be related to the training procedure or another part of the code. Could you post the code so that we can have a look? Usually the computation graph is unintentionally stored somewhere, e.g. by using losses += loss instead of losses += loss.item(). Hi, I think the main worry you have with using Variables everywere is the overhead it could imply compared to use pure Tensors directly? This has been looked into in details and in the current master branch, the overhead of using a Variable (with requires_grad=True or torch.no_grad()) is negligible&hellip; So the immediate takeaway from the above discussion is replace time.time with time.perf_counter() have a torch.cuda.synchronize() before taking the the start_time, maybe don’t take the first batch (of a given size)
1,826
{'text': ['<a class="mention" href="/u/kunasiramesh">@kunasiramesh</a>, <a class="mention" href="/u/gkv">@Gkv</a> The memory issue might be related to the training procedure or another part of the code.\n\nCould you post the code so that we can have a look?\n\nUsually the computation graph is unintentionally stored somewhere, e.g. by using losses += loss instead of losses += loss.item().'], 'answer_start': [1826]}
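A minimal sketch of the pitfall named in the answer above: accumulating the loss tensor itself keeps every iteration's graph alive, while .item() does not:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

running_loss = 0.0
for _ in range(3):
    x, y = torch.randn(8, 10), torch.randn(8, 1)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    # running_loss += loss        # would keep each iteration's graph alive -> growing memory
    running_loss += loss.item()   # detaches to a Python float, so the graph can be freed
```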
Is that Possible for Pytorch to Provide Convolution Function For Purely Tensors (Not Variables)? Important on Training Inference Based Unsupervised Learning Models
Here is my code: filters = torch.randn(8,4,3,3).cuda() inputs = torch.randn(1,4,5,5).cuda() torch.nn.functional.conv2d(inputs, filters, padding=1) Error: TypeError: argument 0 is not a Variable. I don’t want to use autograd.Variable to wrap my tensors and treat each computation as a node, even it&hellip;
1
2018-02-28T23:28:23.999Z
Hi, I think the main worry you have with using Variables everywere is the overhead it could imply compared to use pure Tensors directly? This has been looked into in details and in the current master branch, the overhead of using a Variable (with requires_grad=True or torch.no_grad()) is negligible…
0
2018-03-02T12:56:23.175Z
https://discuss.pytorch.org/t/is-that-possible-for-pytorch-to-provide-convolution-function-for-purely-tensors-not-variables-important-on-training-inference-based-unsupervised-learning-models/14161/11
<a class="mention" href="/u/kunasiramesh">@kunasiramesh</a>, <a class="mention" href="/u/gkv">@Gkv</a> The memory issue might be related to the training procedure or another part of the code. Could you post the code so that we can have a look? Usually the computation graph is unintentionally stored somewhere, e.g. by using losses += loss instead of losses += loss.item(). Hi, I think the main worry you have with using Variables everywere is the overhead it could imply compared to use pure Tensors directly? This has been looked into in details and in the current master branch, the overhead of using a Variable (with requires_grad=True or torch.no_grad()) is negligible&hellip; So the immediate takeaway from the above discussion is replace time.time with time.perf_counter() have a torch.cuda.synchronize() before taking the the start_time, maybe don’t take the first batch (of a given size)
1,289
{'text': ['Hi,\n\nI think the main worry you have with using Variables everywere is the overhead it could imply compared to use pure Tensors directly? This has been looked into in details and in the current master branch, the overhead of using a Variable (with requires_grad=True or torch.no_grad()) is negligible&hellip;'], 'answer_start': [1289]}
Why time.time() in python is inaccurte?
I think it is an elementary question about programming with GPU. First, i tried to use time.time() in python module, to measure the operation time of some modules in NNs. such as def forward(self, x): end = time.time() output1 = self.layer1(x) time_output1 = time.time() output2 = self.lay&hellip;
0
2020-08-27T08:59:53.275Z
So the immediate takeaway from the above discussion is replace time.time with time.perf_counter() have a torch.cuda.synchronize() before taking the the start_time, maybe don’t take the first batch (of a given size)
3
2020-08-28T09:58:29.153Z
https://discuss.pytorch.org/t/why-time-time-in-python-is-inaccurte/94274/8
<a class="mention" href="/u/kunasiramesh">@kunasiramesh</a>, <a class="mention" href="/u/gkv">@Gkv</a> The memory issue might be related to the training procedure or another part of the code. Could you post the code so that we can have a look? Usually the computation graph is unintentionally stored somewhere, e.g. by using losses += loss instead of losses += loss.item(). Hi, I think the main worry you have with using Variables everywere is the overhead it could imply compared to use pure Tensors directly? This has been looked into in details and in the current master branch, the overhead of using a Variable (with requires_grad=True or torch.no_grad()) is negligible&hellip; So the immediate takeaway from the above discussion is replace time.time with time.perf_counter() have a torch.cuda.synchronize() before taking the the start_time, maybe don’t take the first batch (of a given size)
685
{'text': ['So the immediate takeaway from the above discussion is\n\nreplace time.time with time.perf_counter()\n\nhave a torch.cuda.synchronize() before taking the the start_time,\n\nmaybe don’t take the first batch (of a given size)'], 'answer_start': [685]}
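The takeaways listed in the answer above, put into a small self-contained timing sketch (the layer and input sizes are arbitrary):

```python
import time
import torch
import torch.nn as nn

layer = nn.Linear(1024, 1024)
x = torch.randn(64, 1024)
if torch.cuda.is_available():
    layer, x = layer.cuda(), x.cuda()

# Warm-up iteration so the first-batch setup cost is not measured.
layer(x)

if torch.cuda.is_available():
    torch.cuda.synchronize()      # wait for pending GPU work before reading the clock
start = time.perf_counter()       # higher resolution than time.time()
out = layer(x)
if torch.cuda.is_available():
    torch.cuda.synchronize()      # CUDA calls are asynchronous; sync before stopping the timer
elapsed = time.perf_counter() - start
print(f'forward pass took {elapsed * 1e3:.3f} ms')
```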
How to preserve backward grad_fn after distributed operations
I am trying to implement model parallelism in a distributed cluster setting. Let’s say I have a tensor tensor in each process and a number of operations have been performed on it (in each process independently). The tensor has a .grad_fn attached to it. Now I want to perform an all_gather. so that &hellip;
2
2019-06-30T14:40:51.982Z
I’ve built this package that does this automatically now: <a href="https://github.com/ag14774/diffdist" rel="nofollow noopener">https://github.com/ag14774/diffdist</a>. So this question can be marked as solved
0
2019-08-28T11:43:55.997Z
https://discuss.pytorch.org/t/how-to-preserve-backward-grad-fn-after-distributed-operations/49343/4
I’ve built this package that does this automatically now: <a href="https://github.com/ag14774/diffdist" rel="nofollow noopener">https://github.com/ag14774/diffdist</a>. So this question can be marked as solved How would you like to ignore the class in your one-hot encoded tensor? Do you want to remove it completely? This code should just remove the unwanted class channel: batch_size = 10 n_classes = 5 h, w = 24, 24 labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes) one_hot =&hellip; please try set CUDAHOSTCXX=
1,804
{'text': ['I’ve built this package that does this automatically now: <a href="https://github.com/ag14774/diffdist" rel="nofollow noopener">https://github.com/ag14774/diffdist</a>. So this question can be marked as solved'], 'answer_start': [1804]}
Make one hot encoding with ignore label for semantic segmentation?
Hello all, I want to make one hot encoding with ignoring label for semantic segmentation. My labels has 22 values from 0 to 20 and one value is 255, called an ignored label. I want to convert the labels to one-hot encoding without considering the ignored label. def make_one_hot(labels, num_classes)&hellip;
1
2018-05-15T08:12:43.374Z
How would you like to ignore the class in your one-hot encoded tensor? Do you want to remove it completely? This code should just remove the unwanted class channel: batch_size = 10 n_classes = 5 h, w = 24, 24 labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes) one_hot =…
4
2018-05-16T20:02:36.850Z
https://discuss.pytorch.org/t/make-one-hot-encoding-with-ignore-label-for-semantic-segmentation/18126/14
I’ve built this package that does this automatically now: <a href="https://github.com/ag14774/diffdist" rel="nofollow noopener">https://github.com/ag14774/diffdist</a>. So this question can be marked as solved How would you like to ignore the class in your one-hot encoded tensor? Do you want to remove it completely? This code should just remove the unwanted class channel: batch_size = 10 n_classes = 5 h, w = 24, 24 labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes) one_hot =&hellip; please try set CUDAHOSTCXX=
1,112
{'text': ['How would you like to ignore the class in your one-hot encoded tensor?\n\nDo you want to remove it completely?\n\nThis code should just remove the unwanted class channel:\n\nbatch_size = 10\n\nn_classes = 5\n\nh, w = 24, 24\n\nlabels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes)\n\none_hot =…'], 'answer_start': [1112]}
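A runnable sketch along the lines of the truncated answer above, using its toy shapes; the scatter_-based one-hot construction and the ignored class index are assumptions, since the original snippet is cut off:

```python
import torch

batch_size, n_classes, h, w = 10, 5, 24, 24
ignore_class = 4                                   # hypothetical index of the class to drop

labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes)

# One-hot encode along the channel dimension.
one_hot = torch.zeros(batch_size, n_classes, h, w)
one_hot.scatter_(1, labels, 1.0)

# Remove the unwanted class channel, as the answer suggests.
keep = [c for c in range(n_classes) if c != ignore_class]
one_hot = one_hot[:, keep]
print(one_hot.shape)                               # torch.Size([10, 4, 24, 24])
```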
PyTorch build from source on Windows
I’m trying to build PyTorch from source on Windows 10 (as described in pytorch repo), and I’m getting an error: Building wheel torch-1.1.0a0+542c273 -- Building version 1.1.0a0+542c273 Microsoft (R) Build Engine 15.9.21+g9802d43bc3 dla platformy .NET Framework Copyright (C) Microsoft Corporation. W&hellip;
0
2019-03-19T13:50:28.720Z
please try set CUDAHOSTCXX=
1
2019-03-21T17:00:58.357Z
https://discuss.pytorch.org/t/pytorch-build-from-source-on-windows/40288/16
I’ve built this package that does this automatically now: <a href="https://github.com/ag14774/diffdist" rel="nofollow noopener">https://github.com/ag14774/diffdist</a>. So this question can be marked as solved How would you like to ignore the class in your one-hot encoded tensor? Do you want to remove it completely? This code should just remove the unwanted class channel: batch_size = 10 n_classes = 5 h, w = 24, 24 labels = torch.empty(batch_size, 1, h, w, dtype=torch.long).random_(n_classes) one_hot =&hellip; please try set CUDAHOSTCXX=
523
{'text': ['please try\n\nset CUDAHOSTCXX='], 'answer_start': [523]}
The kernel appears to have died. It will restart automatically
I facing a common problem when loading pre-training model using PyTorch. Jupyter notebook is crashing “The kernel appears to have died. It will restart automatically” I have followed the discussion <a href="https://github.com/jupyter/notebook/issues/2784" rel="noopener nofollow ugc">link</a>, <a href="https://github.com/tensorflow/tensorflow/issues/9829" rel="noopener nofollow ugc">link</a>, and <a href="https://stackoverflow.com/questions/47022997/jupyter-the-kernel-appears-to-have-died-it-will-restart-automatically" rel="noopener nofollow ugc">link</a> but not fix, any suggestions? The environment specifications as follows: OS : &hellip;
0
2020-11-12T10:15:39.407Z
I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2
1
2020-11-25T22:37:31.671Z
https://discuss.pytorch.org/t/the-kernel-appears-to-have-died-it-will-restart-automatically/102533/9
I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2 requires_grad you are missing an “s” search for named_parameters… here’s how it’s done, print('Training these layers') for name,param in model.named_parameters(): if param.requires_grad is True: print(name, param.requires_grad) I hope you can flip the requires_grad as per the need… Secondly don’t pass model.parameters() …
1,102
{'text': ['I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2'], 'answer_start': [1102]}
Why is it when I call require_grad = False on all my params my weights in the network would still update?
What I am trying to do right now is to write a multi layer conv2d encoder and freeze the weights from updating for the earlier layers. This hopefully would give me back a similar effect like progressively growing the layers. This way I can initialize the complete network first without worrying about&hellip;
0
2018-07-31T17:43:14.969Z
requires_grad you are missing an “s”
1
2018-08-01T17:23:16.826Z
https://discuss.pytorch.org/t/why-is-it-when-i-call-require-grad-false-on-all-my-params-my-weights-in-the-network-would-still-update/22126/6
I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2 requires_grad you are missing an “s” search for named_parameters… here’s how it’s done, print('Training these layers') for name,param in model.named_parameters(): if param.requires_grad is True: print(name, param.requires_grad) I hope you can flip the requires_grad as per the need… Secondly don’t pass model.parameters() …
848
{'text': ['requires_grad\n\nyou are missing an “s”'], 'answer_start': [848]}
How to pass certain layers weights in the optimizer
lets say i have a model from torchvision import models model = models.resnet18(pretrained=True) Now i freeze all the layers for param in model.parameters(): param.requires_grad = False and then pass all the models parameters in the optimizer optimizer = optim.Adam(model.parameters() , lr=0.1) &hellip;
1
2019-07-03T11:54:09.019Z
search for named_parameters… here’s how it’s done, print('Training these layers') for name,param in model.named_parameters(): if param.requires_grad is True: print(name, param.requires_grad) I hope you can flip the requires_grad as per the need… Secondly don’t pass model.parameters() …
1
2019-07-04T05:17:33.780Z
https://discuss.pytorch.org/t/how-to-pass-certain-layers-weights-in-the-optimizer/49605/8
I had this same issue on a pytorch install on an older notebook with only 2 gigs of ram when I was running torch 1.4.0. I removed 1.4.0 and replaced it with 1.1.0. This config behaved perfectly. I might also add that I am having the same problem on the notebook, when trying to import Tensorflow2 requires_grad you are missing an “s” search for named_parameters… here’s how it’s done, print('Training these layers') for name,param in model.named_parameters(): if param.requires_grad is True: print(name, param.requires_grad) I hope you can flip the requires_grad as per the need… Secondly don’t pass model.parameters() …
335
{'text': ['search for named_parameters…\n\nhere’s how it’s done,\n\nprint(&#39;Training these layers&#39;)\n\nfor name,param in model.named_parameters():\n\nif param.requires_grad is True:\n\nprint(name, param.requires_grad)\n\nI hope you can flip the requires_grad as per the need…\n\nSecondly don’t pass model.parameters() &hellip;'], 'answer_start': [335]}
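A short sketch of the pattern suggested in that answer: freeze the pretrained backbone and hand only the still-trainable parameters to the optimizer. The new 10-class head and the learning rate are placeholder choices taken from the question.

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    model = models.resnet18(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False          # freeze everything
    model.fc = nn.Linear(512, 10)            # new head; requires_grad=True by default

    optimizer = optim.Adam(
        filter(lambda p: p.requires_grad, model.parameters()),  # only the head
        lr=0.1,
    )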
MNIST server down
Hello together, can someone confirm, that the server for downloading MNIST dataset is down? I cannot access the dataset by the dataloader. The following message is printed: Traceback (most recent call last): File "/opt/conda/lib/python3.8/threading.py", line 932, in _bootstrap_inner self.run…
1
2021-03-11T09:25:42.585Z
If the version of torchvision is 0.9.0, which is currently stable, being unable to download MNIST is (unfortunately) expected, but if the version is nightly, it’s not expected.
0
2021-03-17T10:43:12.107Z
https://discuss.pytorch.org/t/mnist-server-down/114433/10
If the version of torchvision is 0.9.0, which is currently stable, being unable to download MNIST is (unfortunately) expected, but if the version is nightly, it’s not expected. I’m not sure, why it’s not working. If you would like to visualize both probability maps for the two classes, your code should work: plt.figure() plt.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu()) # plot class0 plt.figure() plt.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu()) # plot class1&hellip; Just curious here. What’s the benefit of doing another forward pass on the result? result = self.model(input_var.view(-1, c, h, w)) # fuse batch size and ncrops output = self.model(result)
1,286
{'text': ['If the version of torchvision is 0.9.0, which is currently stable, being unable to download MNIST is (unfortunately) expected, but if the version is nightly, it’s not expected.'], 'answer_start': [1286]}
How can I display a test image and display the mask for it based on my trained model?
I am wondering how I can test the trained model for semantic segmentation and visualise the mask for the test image. There is an example for a classification problem in Pytorch but I couldn’t find any obvious example for segmentation. I found this page (https://medium.com/@tsakunelsonz/loading-and-training-a-neural-network-with-custom-dataset-via-transfer-learning-in-pytorch-8e672933469) that tests the network, but it’s for classifica…
0
2018-12-25T12:41:36.913Z
I’m not sure, why it’s not working. If you would like to visualize both probability maps for the two classes, your code should work: plt.figure() plt.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu()) # plot class0 plt.figure() plt.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu()) # plot class1&hellip;
3
2018-12-30T12:42:51.924Z
https://discuss.pytorch.org/t/how-can-i-display-a-test-image-and-display-the-mask-for-it-based-on-my-trained-model/33016/8
If the version of torchvision is 0.9.0, which is currently stable, being unable to download MNIST is (unfortunately) expected, but if the version is nightly, it’s not expected. I’m not sure, why it’s not working. If you would like to visualize both probability maps for the two classes, your code should work: plt.figure() plt.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu()) # plot class0 plt.figure() plt.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu()) # plot class1&hellip; Just curious here. What’s the benefit of doing another forward pass on the result? result = self.model(input_var.view(-1, c, h, w)) # fuse batch size and ncrops output = self.model(result)
820
{'text': ['I’m not sure, why it’s not working.\n\nIf you would like to visualize both probability maps for the two classes, your code should work:\n\nplt.figure()\n\nplt.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu()) # plot class0\n\nplt.figure()\n\nplt.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu()) # plot class1&hellip;'], 'answer_start': [820]}
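A minimal inference-and-plot sketch for the segmentation question above, assuming a two-class model whose output is log-probabilities (e.g. it ends in log_softmax), an input tensor img of shape [3, H, W], and matplotlib being available:

    import torch
    import matplotlib.pyplot as plt

    model.eval()
    with torch.no_grad():
        out = model(img.unsqueeze(0))        # [1, 2, H, W] log-probabilities (assumed)

    probs = torch.exp(out)                   # back to probabilities
    pred_mask = out.argmax(dim=1)            # [1, H, W] class index per pixel

    plt.imshow(probs[0, 1].cpu())            # probability map for class 1
    plt.figure()
    plt.imshow(pred_mask[0].cpu(), cmap='gray')
    plt.show()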
Expected 4-dimensional input for 4-dimensional weight [64, 20, 7, 7], but got input of size [30, 9] instead
Long shot but can anyone please tell me what I am doing wrong? Thank you. I am using the slightly modified version of: https://github.com/jeffreyhuang1/two-stream-action-recognition Everything worked perfectly fine (albeit bad performance) with random crop for training and centercrop for testing/v…
1
2019-03-26T08:47:40.157Z
Just curious here. What’s the benefit of doing another forward pass on the result? result = self.model(input_var.view(-1, c, h, w)) # fuse batch size and ncrops output = self.model(result)
1
2019-03-26T10:41:51.833Z
https://discuss.pytorch.org/t/expected-4-dimensional-input-for-4-dimensional-weight-64-20-7-7-but-got-input-of-size-30-9-instead/40903/7
If the version of torchvision is 0.9.0, which is currently stable, being unable to download MNIST is (unfortunately) expected, but if the version is nightly, it’s not expected. I’m not sure, why it’s not working. If you would like to visualize both probability maps for the two classes, your code should work: plt.figure() plt.imshow(torch.exp(outputs[0,0,:,:]).detach().cpu()) # plot class0 plt.figure() plt.imshow(torch.exp(outputs[0,1,:,:]).detach().cpu()) # plot class1&hellip; Just curious here. What’s the benefit of doing another forward pass on the result? result = self.model(input_var.view(-1, c, h, w)) # fuse batch size and ncrops output = self.model(result)
489
{'text': ['Just curious here. What’s the benefit of doing another forward pass on the result?\n\nresult = self.model(input_var.view(-1, c, h, w)) # fuse batch size and ncrops\n\noutput = self.model(result)'], 'answer_start': [489]}
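For reference, the usual FiveCrop/TenCrop evaluation pattern (as in the torchvision docs) runs the model once on the fused batch and then averages over crops; a sketch, assuming input has shape [bs, ncrops, c, h, w]:

    bs, ncrops, c, h, w = input.size()
    output = model(input.view(-1, c, h, w))               # fuse batch size and ncrops, single forward pass
    output_avg = output.view(bs, ncrops, -1).mean(dim=1)  # average the predictions over the crops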
Training phase of Leave-One-Out Cross Validation
I am not sure how to make the following code work as expected–maybe it is working correctly actually. Can you please refer to the line number with a fix or suggestion? line numbers can be seen here: https://pastebin.com/Ccy6P0Di Before showing the code, it seems the training is actually working. Ho…
1
2018-11-21T05:10:48.976Z
In this case, yes, you should pass the entire DataLoader to train_model, since the sampler should make sure you are not sampling the test image. You could create a list before the LOOCV loop and store all probabilities in it additionally to the predictions: loocv_probs = [] for idx in range(nb_sam&hellip;
0
2018-11-26T08:19:20.549Z
https://discuss.pytorch.org/t/training-phase-of-leave-one-out-cross-validation/30138/9
In this case, yes, you should pass the entire DataLoader to train_model, since the sampler should make sure you are not sampling the test image. You could create a list before the LOOCV loop and store all probabilities in it additionally to the predictions: loocv_probs = [] for idx in range(nb_sam&hellip; Sure, the model in the tutorial outputs 10 logits in its last linear layer. As you can see, no non-linearity was used on this layer, so that the values represent the raw logits for all 10 classes. If you call softmax on them, you would get the probabilities for each class, but we don’t want to do t&hellip; Ah, perfect! Yep, this makes sense. Thanks very much for explaining it. I also want to offer a revision to my previous post. Turns out you get very bad model performance if you do that training sub-loop like I was. The correct way to do teacher forcing is just to pass the targets shifted left one. &hellip;
1,358
{'text': ['In this case, yes, you should pass the entire DataLoader to train_model, since the sampler should make sure you are not sampling the test image.\n\nYou could create a list before the LOOCV loop and store all probabilities in it additionally to the predictions:\n\nloocv_probs = []\n\nfor idx in range(nb_sam&hellip;'], 'answer_start': [1358]}
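A rough leave-one-out skeleton matching that advice, with SubsetRandomSampler doing the train/test split; create_model and train_model are hypothetical stand-ins for the thread's own routines:

    import torch
    from torch.utils.data import DataLoader, SubsetRandomSampler

    n = len(dataset)
    loocv_preds, loocv_probs = [], []
    for test_idx in range(n):
        train_idx = [i for i in range(n) if i != test_idx]
        train_loader = DataLoader(dataset, batch_size=16,
                                  sampler=SubsetRandomSampler(train_idx))
        test_loader = DataLoader(dataset, batch_size=1,
                                 sampler=SubsetRandomSampler([test_idx]))

        model = create_model()               # fresh model per fold (hypothetical factory)
        train_model(model, train_loader)     # hypothetical training routine

        model.eval()
        with torch.no_grad():
            for data, target in test_loader:
                prob = torch.softmax(model(data), dim=1)
                loocv_probs.append(prob)
                loocv_preds.append(prob.argmax(dim=1))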
Input and target size mismatch
I am trying to implement one-hot encoding for MNIST imported from Kaggle. The shape of the encoding is [1, 10] but when the loss function runs, it throws the following error: ValueError: Expected input batch_size (10) to match target batch_size (256). My mini batch-size is 256. What should I do?
0
2018-10-03T16:53:36.557Z
Sure, the model in the tutorial outputs 10 logits in its last linear layer. As you can see, no non-linearity was used on this layer, so that the values represent the raw logits for all 10 classes. If you call softmax on them, you would get the probabilities for each class, but we don’t want to do t&hellip;
4
2018-10-04T16:55:06.300Z
https://discuss.pytorch.org/t/input-and-target-size-mismatch/26479/5
In this case, yes, you should pass the entire DataLoader to train_model, since the sampler should make sure you are not sampling the test image. You could create a list before the LOOCV loop and store all probabilities in it additionally to the predictions: loocv_probs = [] for idx in range(nb_sam&hellip; Sure, the model in the tutorial outputs 10 logits in its last linear layer. As you can see, no non-linearity was used on this layer, so that the values represent the raw logits for all 10 classes. If you call softmax on them, you would get the probabilities for each class, but we don’t want to do t&hellip; Ah, perfect! Yep, this makes sense. Thanks very much for explaining it. I also want to offer a revision to my previous post. Turns out you get very bad model performance if you do that training sub-loop like I was. The correct way to do teacher forcing is just to pass the targets shifted left one. &hellip;
989
{'text': ['Sure, the model in the tutorial outputs 10 logits in its last linear layer. As you can see, no non-linearity was used on this layer, so that the values represent the raw logits for all 10 classes.\n\nIf you call softmax on them, you would get the probabilities for each class, but we don’t want to do t&hellip;'], 'answer_start': [989]}
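To make the point about nn.CrossEntropyLoss concrete: it expects raw logits plus integer class indices, so a one-hot target is converted back with argmax. A small sketch with assumed MNIST shapes:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()

    logits = model(data)                     # [batch_size, 10] raw outputs, no softmax
    target = one_hot_target.argmax(dim=1)    # [batch_size, 10] one-hot -> [batch_size] indices
    loss = criterion(logits, target)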
How to use/train Transformer in Pytorch
I followed the tutorial given here (https://pytorch.org/tutorials/beginner/transformer_tutorial.html). However, the implementation (https://pytorch.org/docs/master/_modules/torch/nn/modules/transformer.html#Transformer) for Transformer is significantly different in the pytorch codebase. The latter being closer to the proposed approach by the authors. Can someone guide me how to use the pytorch transformer to do a sequence to sequence translation t&hellip;
1
2020-03-09T15:39:30.278Z
Ah, perfect! Yep, this makes sense. Thanks very much for explaining it. I also want to offer a revision to my previous post. Turns out you get very bad model performance if you do that training sub-loop like I was. The correct way to do teacher forcing is just to pass the targets shifted left one. &hellip;
2
2020-04-09T14:56:05.247Z
https://discuss.pytorch.org/t/how-to-use-train-transformer-in-pytorch/72607/8
In this case, yes, you should pass the entire DataLoader to train_model, since the sampler should make sure you are not sampling the test image. You could create a list before the LOOCV loop and store all probabilities in it additionally to the predictions: loocv_probs = [] for idx in range(nb_sam&hellip; Sure, the model in the tutorial outputs 10 logits in its last linear layer. As you can see, no non-linearity was used on this layer, so that the values represent the raw logits for all 10 classes. If you call softmax on them, you would get the probabilities for each class, but we don’t want to do t&hellip; Ah, perfect! Yep, this makes sense. Thanks very much for explaining it. I also want to offer a revision to my previous post. Turns out you get very bad model performance if you do that training sub-loop like I was. The correct way to do teacher forcing is just to pass the targets shifted left one. &hellip;
619
{'text': ['Ah, perfect! Yep, this makes sense. Thanks very much for explaining it.\n\nI also want to offer a revision to my previous post. Turns out you get very bad model performance if you do that training sub-loop like I was. The correct way to do teacher forcing is just to pass the targets shifted left one. &hellip;'], 'answer_start': [619]}
How to confirm freezing is working?
Hello all, i’m trying to freeze all parameters of my model. “param.required grad = False” is very simple and powerful way that most of developer accept, but i failed to confirm the effect of that. For the simplest test to check whether freezing is works, first i initailze model and assign param.re&hellip;
1
2018-08-08T08:05:21.455Z
Optimizer can’t take parameters with requires_grad=False. It will throw an error. But in your case, you haven’t really freezed the parameters. There is a typo in the code. It should be param.requires_grad=False not param.required_grad. Since your parameters still has requires_grad=True, it won’t thr&hellip;
0
2018-08-08T12:39:28.660Z
https://discuss.pytorch.org/t/how-to-confirm-freezing-is-working/22648/4
Optimizer can’t take parameters with requires_grad=False. It will throw an error. But in your case, you haven’t really freezed the parameters. There is a typo in the code. It should be param.requires_grad=False not param.required_grad. Since your parameters still has requires_grad=True, it won’t thr&hellip; [image] CCL: So far I’ve worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) intialises the same process on all 8 GPUs The init_process_group API only sets up the process where this function is invoked. And, as t&hellip; [image] KFrank: Assuming batchsize = 4 , nClasses = 5 , H = 224 , and W = 224 , CrossEntropyLoss will be expecting the input (prediction) you give it to be a FloatTensor of shape (4, 5, 244, 244) , and the target (ground truth) to be a LongTensor of shape (4, 244, 244). Dear <a class="mention" href="/u/kfrank">@KFrank</a> you h&hellip;
1,854
{'text': ['Optimizer can’t take parameters with requires_grad=False. It will throw an error. But in your case, you haven’t really freezed the parameters. There is a typo in the code. It should be param.requires_grad=False not param.required_grad. Since your parameters still has requires_grad=True, it won’t thr&hellip;'], 'answer_start': [1854]}
Running on specific GPU device
I’m trying to specify which single GPU to run code on within Python code, by setting the GPU index visible to PyTorch. Here’s what I’ve tried: for i in range(8): #8 gpus os.environ["CUDA_AVAILABLE_DEVICES"] = str(i) print(torch.cuda.device_count()) # this line always outputs 8 (…
0
2020-07-20T06:48:48.598Z
CCL: So far I’ve worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) initialises the same process on all 8 GPUs The init_process_group API only sets up the process where this function is invoked. And, as t…
1
2020-07-20T15:28:06.267Z
https://discuss.pytorch.org/t/running-on-specific-gpu-device/89841/8
Optimizer can’t take parameters with requires_grad=False. It will throw an error. But in your case, you haven’t really freezed the parameters. There is a typo in the code. It should be param.requires_grad=False not param.required_grad. Since your parameters still has requires_grad=True, it won’t thr&hellip; [image] CCL: So far I’ve worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) intialises the same process on all 8 GPUs The init_process_group API only sets up the process where this function is invoked. And, as t&hellip; [image] KFrank: Assuming batchsize = 4 , nClasses = 5 , H = 224 , and W = 224 , CrossEntropyLoss will be expecting the input (prediction) you give it to be a FloatTensor of shape (4, 5, 244, 244) , and the target (ground truth) to be a LongTensor of shape (4, 244, 244). Dear <a class="mention" href="/u/kfrank">@KFrank</a> you h&hellip;
1,236
{'text': ['[image] CCL:\n\nSo far I’ve worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) intialises the same process on all 8 GPUs\n\nThe init_process_group API only sets up the process where this function is invoked. And, as t&hellip;'], 'answer_start': [1236]}
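Two common ways to pin work to one GPU, related to the question above (note the environment variable is CUDA_VISIBLE_DEVICES, and it has to be set before CUDA is initialised); the device index 3 is only an example:

    # Option 1: restrict visibility from the shell, before the script starts
    #   CUDA_VISIBLE_DEVICES=3 python train.py
    # Option 2: address the device explicitly inside the script
    import torch

    device = torch.device('cuda:3' if torch.cuda.is_available() else 'cpu')
    model = model.to(device)
    data = data.to(device)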
Cross Entropy Loss error on image segmentation
I am a new user of Pytorch. I am adapting the Unet segmentation model, but I have an error in the evaluation of the Cross Entropy Loss function during training. I used torch.utils.data.Dataset to build a specific dataset train_data = DataLoaderSegmentation(train_path, mode=‘train’) train_loader&hellip;
0
2019-11-06T16:08:42.146Z
KFrank: Assuming batchsize = 4 , nClasses = 5 , H = 224 , and W = 224 , CrossEntropyLoss will be expecting the input (prediction) you give it to be a FloatTensor of shape (4, 5, 244, 244) , and the target (ground truth) to be a LongTensor of shape (4, 244, 244). Dear @KFrank you h…
1
2019-11-11T18:10:44.529Z
https://discuss.pytorch.org/t/cross-entropy-loss-error-on-image-segmentation/60194/13
Optimizer can’t take parameters with requires_grad=False. It will throw an error. But in your case, you haven’t really freezed the parameters. There is a typo in the code. It should be param.requires_grad=False not param.required_grad. Since your parameters still has requires_grad=True, it won’t thr&hellip; [image] CCL: So far I’ve worked out that the line dist.init_process_group(backend=args.dist_backend, init_method=args.dist_url, world_size=1, rank=args.rank) intialises the same process on all 8 GPUs The init_process_group API only sets up the process where this function is invoked. And, as t&hellip; [image] KFrank: Assuming batchsize = 4 , nClasses = 5 , H = 224 , and W = 224 , CrossEntropyLoss will be expecting the input (prediction) you give it to be a FloatTensor of shape (4, 5, 244, 244) , and the target (ground truth) to be a LongTensor of shape (4, 244, 244). Dear <a class="mention" href="/u/kfrank">@KFrank</a> you h&hellip;
613
{'text': ['[image] KFrank:\n\nAssuming batchsize = 4 , nClasses = 5 , H = 224 , and\n\nW = 224 , CrossEntropyLoss will be expecting the input\n\n(prediction) you give it to be a FloatTensor of shape\n\n(4, 5, 244, 244) , and the target (ground truth) to be a\n\nLongTensor of shape (4, 244, 244).\n\nDear <a class="mention" href="/u/kfrank">@KFrank</a> you h&hellip;'], 'answer_start': [613]}
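A shape-only sketch of what the quoted answer describes for per-pixel cross entropy: float logits of shape [N, C, H, W] and a long target of shape [N, H, W].

    import torch
    import torch.nn as nn

    N, C, H, W = 4, 5, 224, 224
    pred = torch.randn(N, C, H, W)              # float logits
    target = torch.randint(0, C, (N, H, W))     # long class index per pixel
    loss = nn.CrossEntropyLoss()(pred, target)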
Change the dimension of tensor
Hi, I have a tensor with dimension [1, 1, 4, 6] like this: a = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]]) I want to change it to a tensor like this: [[ [[1, 2], &hellip;
0
2019-07-24T03:50:38.872Z
zahra: a = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]]) Ow, sorry. I used the tensor in the first post. a.unfold(2, 2,2).unfold(3, 2,2).contiguous().view(2, 6, 2, 2) by the way, as I told you only need to work wi…
0
2019-07-25T20:08:01.217Z
https://discuss.pytorch.org/t/change-the-dimension-of-tensor/51459/12
[image] zahra: a = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]]) Ow, sorry. I used the tensor in the first post. a.unfold(2, 2,2).unfold(3, 2,2).contiguous().view(2, 6, 2, 2) by the way, as I told you only need to work wi&hellip; OK, so I managed to find a workaround on RAM increasing, the solution was <a href="https://discuss.pytorch.org/t/3d-cnn-models-ensemble/15481/4">your previous suggestion to my other post</a> to do the standardization in the Dataset and return multiple inputs. I think it was because i normalized my inputs inside the wrapper/ensemble model maybe there were variables that we&hellip; These constructs are used to pass a variable amount of arguments to a class instantiation or function in Python. Have a look at <a href="https://www.geeksforgeeks.org/args-kwargs-python/">this explanation</a> for more information.
1,912
{'text': ['[image] zahra:\n\na = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]])\n\nOw, sorry.\n\nI used the tensor in the first post.\n\na.unfold(2, 2,2).unfold(3, 2,2).contiguous().view(2, 6, 2, 2)\n\nby the way, as I told you only need to work wi&hellip;'], 'answer_start': [1912]}
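The unfold trick from that thread, written out on a concrete [1, 1, 4, 6] tensor so the intermediate shapes are visible; this is a sketch rather than the exact code from the post:

    import torch

    a = torch.arange(1, 25).view(1, 1, 4, 6)
    b = a.unfold(2, 2, 2).unfold(3, 2, 2)    # shape [1, 1, 2, 3, 2, 2]: a 2x3 grid of 2x2 patches
    blocks = b.contiguous().view(-1, 2, 2)   # six 2x2 blocks, e.g. blocks[0] == [[1, 2], [7, 8]]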
RAM keep increasing in inference [SOLVED]
Hi all, I’m encountering a problem where my RAM is during inference of multiple models (the GPU memory is released though). I’ve trained 6 models with binary classification and now i’m trying to do inference of all the 6 models one after the other and i’m for some reason my RAM keep increasing lik&hellip;
0
2018-03-28T07:48:17.760Z
OK, so I managed to find a workaround on RAM increasing, the solution was your previous suggestion to my other post (https://discuss.pytorch.org/t/3d-cnn-models-ensemble/15481/4) to do the standardization in the Dataset and return multiple inputs. I think it was because i normalized my inputs inside the wrapper/ensemble model maybe there were variables that we…
1
2018-03-29T07:14:53.415Z
https://discuss.pytorch.org/t/ram-keep-increasing-in-inference-solved/15599/14
[image] zahra: a = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]]) Ow, sorry. I used the tensor in the first post. a.unfold(2, 2,2).unfold(3, 2,2).contiguous().view(2, 6, 2, 2) by the way, as I told you only need to work wi&hellip; OK, so I managed to find a workaround on RAM increasing, the solution was <a href="https://discuss.pytorch.org/t/3d-cnn-models-ensemble/15481/4">your previous suggestion to my other post</a> to do the standardization in the Dataset and return multiple inputs. I think it was because i normalized my inputs inside the wrapper/ensemble model maybe there were variables that we&hellip; These constructs are used to pass a variable amount of arguments to a class instantiation or function in Python. Have a look at <a href="https://www.geeksforgeeks.org/args-kwargs-python/">this explanation</a> for more information.
1,259
{'text': ['OK, so I managed to find a workaround on RAM increasing, the solution was <a href="https://discuss.pytorch.org/t/3d-cnn-models-ensemble/15481/4">your previous suggestion to my other post</a> to do the standardization in the Dataset and return multiple inputs. I think it was because i normalized my inputs inside the wrapper/ensemble model maybe there were variables that we&hellip;'], 'answer_start': [1259]}
How to load pytorch model
I have saved my model using the code torch.save(the_model.state_dict(), PATH) after training while loading, I am confused. can someone explain this code the_model = TheModelClass(*args, **kwargs) the_model.load_state_dict(torch.load(PATH))
0
2020-01-12T07:41:20.179Z
These constructs are used to pass a variable amount of arguments to a class instantiation or function in Python. Have a look at this explanation (https://www.geeksforgeeks.org/args-kwargs-python/) for more information.
0
2020-01-12T08:38:36.774Z
https://discuss.pytorch.org/t/how-to-load-pytorch-model/66432/6
[image] zahra: a = torch.tensor([[[ 1, 2, 3, 4, 5, 6], [ 7, 8, 9, 10, 11, 12], [13, 14, 15, 16, 17, 18], [19, 20, 21, 22, 23, 24]]]) Ow, sorry. I used the tensor in the first post. a.unfold(2, 2,2).unfold(3, 2,2).contiguous().view(2, 6, 2, 2) by the way, as I told you only need to work wi&hellip; OK, so I managed to find a workaround on RAM increasing, the solution was <a href="https://discuss.pytorch.org/t/3d-cnn-models-ensemble/15481/4">your previous suggestion to my other post</a> to do the standardization in the Dataset and return multiple inputs. I think it was because i normalized my inputs inside the wrapper/ensemble model maybe there were variables that we&hellip; These constructs are used to pass a variable amount of arguments to a class instantiation or function in Python. Have a look at <a href="https://www.geeksforgeeks.org/args-kwargs-python/">this explanation</a> for more information.
687
{'text': ['These constructs are used to pass a variable amount of arguments to a class instantiation or function in Python. Have a look at <a href="https://www.geeksforgeeks.org/args-kwargs-python/">this explanation</a> for more information.'], 'answer_start': [687]}
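Spelled out, the loading snippet from that question looks like this; TheModelClass, its constructor arguments and PATH all come from the asker's own code:

    import torch

    model = TheModelClass(*args, **kwargs)       # same class and arguments used at save time
    model.load_state_dict(torch.load(PATH))      # fill it with the saved weights
    model.eval()                                 # switch to inference mode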
FasterRCNN Resnet50 JIT Trace
Hi, I’m trying to trace FasterRCNN to use in Pytorch Mobile on iOS. I simply trace as shown below: model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True) model.eval() input_tensor = torch.rand(1,3,224,224) script_model = torch.jit.trace(model, input_tensor) script_model.sa&hellip;
1
2019-11-27T17:46:44.588Z
<a class="mention" href="/u/hussainharis">@HussainHaris</a> David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model torch.jit.export_opnames(traced_script_module)
0
2020-01-30T03:39:58.385Z
https://discuss.pytorch.org/t/fasterrcnn-resnet50-jit-trace/62337/7
<a class="mention" href="/u/hussainharis">@HussainHaris</a> David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model torch.jit.export_opnames(traced_script_module) So basically, you concatenate a and b along the first dimension, and then take all pairings along that first dimension. If so, my previous code can be adapted to do the job. Hi, I think forward ops are but not backward. <a class="mention" href="/u/goldsborough">@goldsborough</a> should be able to give you a more decisive answer for libtorch.
1,834
{'text': ['<a class="mention" href="/u/hussainharis">@HussainHaris</a> David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model\n\ntorch.jit.export_opnames(traced_script_module)'], 'answer_start': [1834]}
Combine 2 2D-tensors into a 3D tensor
Hi everybody, I’m looking a way to do the following thing: Let’s assume we have a tensor A of dimension [N,F] and a tensor B of dimension [N,F], I would like to obtain a tensor C of dimension [N,N,2*F]. Is there a way to do this ? Thanks for your help !
0
2018-03-20T17:23:22.243Z
So basically, you concatenate a and b along the first dimension, and then take all pairings along that first dimension. If so, my previous code can be adapted to do the job.
0
2018-03-21T08:42:06.450Z
https://discuss.pytorch.org/t/combine-2-2d-tensors-into-a-3d-tensor/15227/14
<a class="mention" href="/u/hussainharis">@HussainHaris</a> David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model torch.jit.export_opnames(traced_script_module) So basically, you concatenate a and b along the first dimension, and then take all pairings along that first dimension. If so, my previous code can be adapted to do the job. Hi, I think forward ops are but not backward. <a class="mention" href="/u/goldsborough">@goldsborough</a> should be able to give you a more decisive answer for libtorch.
1,211
{'text': ['So basically, you concatenate a and b along the first dimension, and then take all pairings along that first dimension.\n\nIf so, my previous code can be adapted to do the job.'], 'answer_start': [1211]}
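One way to build the [N, N, 2*F] tensor of all pairings described above, assuming a and b both have shape [N, F]; a sketch using broadcasting via expand:

    import torch

    N, F = a.shape
    c = torch.cat([
        a.unsqueeze(1).expand(N, N, F),   # a[i] repeated along the second dim
        b.unsqueeze(0).expand(N, N, F),   # b[j] repeated along the first dim
    ], dim=2)                             # c[i, j] == cat(a[i], b[j]), shape [N, N, 2*F]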
Is evaluating the network thread-safe?
I’m sorry if the answer to this question is obvious, but I’m not sure: If I have multiple parallel threads, and each thread has its own input tensor; will evaluating the net->forward() from each thread happen in parallel? (Btw you people have done insanely good work with LibTorch!!! Keep it up)
1
2019-02-21T01:24:48.355Z
Hi, I think forward ops are but not backward. @goldsborough should be able to give you a more decisive answer for libtorch.
1
2019-02-21T10:17:24.936Z
https://discuss.pytorch.org/t/is-evaluating-the-network-thread-safe/37802/2
<a class="mention" href="/u/hussainharis">@HussainHaris</a> David is right, the torchvision c++ APIs are not supported on mobile yet. If your local pytorch version is 1.4.0, you can use the python API below to examine the ops used by your model torch.jit.export_opnames(traced_script_module) So basically, you concatenate a and b along the first dimension, and then take all pairings along that first dimension. If so, my previous code can be adapted to do the job. Hi, I think forward ops are but not backward. <a class="mention" href="/u/goldsborough">@goldsborough</a> should be able to give you a more decisive answer for libtorch.
469
{'text': ['Hi,\n\nI think forward ops are but not backward.\n\n<a class="mention" href="/u/goldsborough">@goldsborough</a> should be able to give you a more decisive answer for libtorch.'], 'answer_start': [469]}
Passing 'model.parameters() + other_parms' to optimizer
I am using nn.AdaptiveLogSoftmaxWithLoss. the way I am building my model, the loss is outside of my nn.Module. How can I pass the weights included in this loss for them to appear in my model.parameters() and model.modules()? Or at least, how can I join both the parameters/modules of my model with t&hellip;
0
2019-05-14T15:06:41.503Z
Hi, model.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method. But model.modules() get submodules in a iteration way, so there will be s&hellip;
1
2019-05-15T01:51:50.082Z
https://discuss.pytorch.org/t/passing-model-parameters-other-parms-to-optimizer/45218/2
Hi, model.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method. But model.modules() get submodules in a iteration way, so there will be s&hellip; I had actually missed the nvidia-smi output: It shows that 2779MiB / 6144MiB (46%) is being utilzed. This implies that the model was successfully loaded into the GPU. One empirical way to verify this is to time it using device = &#39;cpu&#39; and then time it using device = &#39;cuda&#39; and verify the different &hellip; If you have parameters with requires_grad = False backward() does not compute a gradient on those. So you could try something like: def train(): for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = model(data) loss = criterion(output&hellip;
1,280
{'text': ['Hi,\n\nmodel.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method.\n\nBut model.modules() get submodules in a iteration way, so there will be s&hellip;'], 'answer_start': [1280]}
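A sketch of the suggestion above for the nn.AdaptiveLogSoftmaxWithLoss case: turn both parameter generators into lists and concatenate them for the optimizer. The layer sizes and cutoffs are placeholders, not values from the thread.

    import torch.nn as nn
    import torch.optim as optim

    criterion = nn.AdaptiveLogSoftmaxWithLoss(in_features=512, n_classes=10000,
                                              cutoffs=[100, 1000])
    optimizer = optim.Adam(
        list(model.parameters()) + list(criterion.parameters()),
        lr=1e-3,
    )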
It seems Pytorch doesn't use GPU
First, i apologize for my poor English. Recently, I bought RTX2060 for deep learning. I installed pytorch-gpu with conda by conda install pytorch torchvision cudatoolkit=10.1 -c pytorch. Of course, I setup NVIDIA Driver too. But when i ran my pytorch code, it was so slow to train. So i checked tas&hellip;
0
2020-03-29T06:38:36.825Z
I had actually missed the nvidia-smi output: It shows that 2779MiB / 6144MiB (46%) is being utilized. This implies that the model was successfully loaded into the GPU. One empirical way to verify this is to time it using device = 'cpu' and then time it using device = 'cuda' and verify the different …
0
2020-03-29T07:16:18.798Z
https://discuss.pytorch.org/t/it-seems-pytorch-doesnt-use-gpu/74673/4
Hi, model.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method. But model.modules() get submodules in a iteration way, so there will be s&hellip; I had actually missed the nvidia-smi output: It shows that 2779MiB / 6144MiB (46%) is being utilzed. This implies that the model was successfully loaded into the GPU. One empirical way to verify this is to time it using device = &#39;cpu&#39; and then time it using device = &#39;cuda&#39; and verify the different &hellip; If you have parameters with requires_grad = False backward() does not compute a gradient on those. So you could try something like: def train(): for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = model(data) loss = criterion(output&hellip;
949
{'text': ['I had actually missed the nvidia-smi output:\n\nIt shows that 2779MiB / 6144MiB (46%) is being utilzed. This implies that the model was successfully loaded into the GPU. One empirical way to verify this is to time it using device = &#39;cpu&#39; and then time it using device = &#39;cuda&#39; and verify the different &hellip;'], 'answer_start': [949]}
How to backward only a subset of neural network parameters? (avoid retain_graph=True)
Hey; At the beginning of the training, I have created a neural network NN. I create optimizer by optimizer = optim.Adam(NN.parameters(), lr=1e-3) During the training, I’m adding to new layers to this network. (Imagining dynamically increasing number of layers of residual network). optimizer.add&hellip;
1
2019-04-17T04:35:33.458Z
If you have parameters with requires_grad = False backward() does not compute a gradient on those. So you could try something like: def train(): for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = model(data) loss = criterion(output&hellip;
3
2019-04-17T20:16:42.078Z
https://discuss.pytorch.org/t/how-to-backward-only-a-subset-of-neural-network-parameters-avoid-retain-graph-true/42799/8
Hi, model.parameters() and model.modules() are both generator, firstly you could get the list of parameters and modules by list(model.parameters()) and then passing the weights and the loss module in a append to list method. But model.modules() get submodules in a iteration way, so there will be s&hellip; I had actually missed the nvidia-smi output: It shows that 2779MiB / 6144MiB (46%) is being utilzed. This implies that the model was successfully loaded into the GPU. One empirical way to verify this is to time it using device = &#39;cpu&#39; and then time it using device = &#39;cuda&#39; and verify the different &hellip; If you have parameters with requires_grad = False backward() does not compute a gradient on those. So you could try something like: def train(): for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = model(data) loss = criterion(output&hellip;
634
{'text': ['If you have parameters with requires_grad = False backward() does not compute a gradient on those.\n\nSo you could try something like:\n\ndef train():\n\nfor batch_idx, (data, target) in enumerate(train_loader):\n\noptimizer.zero_grad()\n\noutput = model(data)\n\nloss = criterion(output&hellip;'], 'answer_start': [634]}
untimeError: The expanded size of the tensor (32) must match the existing size (8) at non-singleton dimension 1
I have got this error while running my code, the tensor shape is (Batch x Channels x Height x Width) [8,204,15,15], [image: erreeer.PNG] However, it worked perfectly for the same image with different height and width [8,204,11,11]. Thanks in advance
0
2019-05-11T06:49:24.447Z
Add a print statement in your forward method so see the shape of x: def forward(self, x): x = self.model(x) x = self.ap(x) x = x.view(x.size(0), -1) print(x.shape) x = self.fc1(x) x = self.relu(x) x = self.dropout(x) x = self.fc2(x&hellip;
2
2019-05-14T12:21:44.810Z
https://discuss.pytorch.org/t/untimeerror-the-expanded-size-of-the-tensor-32-must-match-the-existing-size-8-at-non-singleton-dimension-1/44960/8
Add a print statement in your forward method so see the shape of x: def forward(self, x): x = self.model(x) x = self.ap(x) x = x.view(x.size(0), -1) print(x.shape) x = self.fc1(x) x = self.relu(x) x = self.dropout(x) x = self.fc2(x&hellip; then you may use a for-loop to divide X_prime into smaller chunks, just as what your old code was doing, just don’t split them too fine, like into row-by-row operations. Time for space, or space for time. Yep, because BatchNorm would trigger DDP comm in forward as well. In that case, need to move the signal checking before forward, but it will be slower. The following code should work. import torch import torch.distributed as dist import torch.multiprocessing as mp import torch.nn as nn import torch&hellip;
1,834
{'text': ['Add a print statement in your forward method so see the shape of x:\n\ndef forward(self, x):\n\nx = self.model(x)\n\nx = self.ap(x)\n\nx = x.view(x.size(0), -1)\n\nprint(x.shape)\n\nx = self.fc1(x)\n\nx = self.relu(x)\n\nx = self.dropout(x)\n\nx = self.fc2(x&hellip;'], 'answer_start': [1834]}
`Exception: process 0 terminated with exit code 1` error when using `torch.multiprocessing.spawn` to parallelize over multiple GPUs
I have the following code below using torch.multiprocessing.spawn to parallelize over multiple GPUs: import numpy as np import torch from torch.multiprocessing import Pool, set_start_method, spawn X = np.array([[1, 3, 2, 3], [2, 3, 5, 6], [1, 2, 3, 4]]) X = torch.DoubleTensor(X) def X_power_func(&hellip;
0
2020-07-27T05:53:36.397Z
then you may use a for-loop to divide X_prime into smaller chunks, just as what your old code was doing, just don’t split them too fine, like into row-by-row operations. Time for space, or space for time.
1
2020-07-29T16:49:42.549Z
https://discuss.pytorch.org/t/exception-process-0-terminated-with-exit-code-1-error-when-using-torch-multiprocessing-spawn-to-parallelize-over-multiple-gpus/90636/21
Add a print statement in your forward method so see the shape of x: def forward(self, x): x = self.model(x) x = self.ap(x) x = x.view(x.size(0), -1) print(x.shape) x = self.fc1(x) x = self.relu(x) x = self.dropout(x) x = self.fc2(x&hellip; then you may use a for-loop to divide X_prime into smaller chunks, just as what your old code was doing, just don’t split them too fine, like into row-by-row operations. Time for space, or space for time. Yep, because BatchNorm would trigger DDP comm in forward as well. In that case, need to move the signal checking before forward, but it will be slower. The following code should work. import torch import torch.distributed as dist import torch.multiprocessing as mp import torch.nn as nn import torch&hellip;
1,166
{'text': ['then you may use a for-loop to divide X_prime into smaller chunks, just as what your old code was doing, just don’t split them too fine, like into row-by-row operations.\n\nTime for space, or space for time.'], 'answer_start': [1166]}
Multiprocessing - Barrier Blocks all Processes?
I am trying to use dist.barrier() to sync my all my processes so that they can finish one epoch together. But as soon as the first process hits the barrier, it stops all the other processes! Why is this?
1
2020-05-08T15:41:07.593Z
Yep, because BatchNorm would trigger DDP comm in forward as well. In that case, need to move the signal checking before forward, but it will be slower. The following code should work. import torch import torch.distributed as dist import torch.multiprocessing as mp import torch.nn as nn import torch&hellip;
1
2020-05-26T20:07:10.528Z
https://discuss.pytorch.org/t/multiprocessing-barrier-blocks-all-processes/80345/27
Add a print statement in your forward method so see the shape of x: def forward(self, x): x = self.model(x) x = self.ap(x) x = x.view(x.size(0), -1) print(x.shape) x = self.fc1(x) x = self.relu(x) x = self.dropout(x) x = self.fc2(x&hellip; then you may use a for-loop to divide X_prime into smaller chunks, just as what your old code was doing, just don’t split them too fine, like into row-by-row operations. Time for space, or space for time. Yep, because BatchNorm would trigger DDP comm in forward as well. In that case, need to move the signal checking before forward, but it will be slower. The following code should work. import torch import torch.distributed as dist import torch.multiprocessing as mp import torch.nn as nn import torch&hellip;
455
{'text': ['Yep, because BatchNorm would trigger DDP comm in forward as well. In that case, need to move the signal checking before forward, but it will be slower. The following code should work.\n\nimport torch\n\nimport torch.distributed as dist\n\nimport torch.multiprocessing as mp\n\nimport torch.nn as nn\n\nimport torch&hellip;'], 'answer_start': [455]}
Is the SGD in Pytorch a real SGD?
After looking up the code of Pytorch’s SGD (https://github.com/pytorch/pytorch/blob/cd9b27231b51633e76e28b6a34002ab83b0660fc/torch/optim/sgd.py#L63), it seems (excuse me in advance if my assertion is wrong) that the SGD is not a real SGD. Indeed, the way gradients are accumulated and more especially the order wherein they are accumulated is up to the user. Thereupon where is the randomne…
1
2017-11-09T12:31:53.698Z
Ok perfect, that was exactly what I thought. Actually, they should be named “Stepper”. For example with SGD that will be “SGDStepper”. That seems more clear.
1
2017-11-09T16:48:20.841Z
https://discuss.pytorch.org/t/is-the-sgd-in-pytorch-a-real-sgd/9714/7
Ok perfect, that was exactly what I thought. Actually, they should be named “Stepper”. For example with SGD that will be “SGDStepper”. That seems more clear. You could build PyTorch from source. The installation is described <a href="https://github.com/pytorch/pytorch#from-source">here</a>. Let me know, if you encounter any problems. It is not loaded in this line. RandomSampler class is just a tool for the Dataloader class. As I said before, if you have a look to the Dataloader class, you will find this : if batch_sampler is None: if sampler is None: if shuffle: sampler = RandomSampler(data&hellip;
1,534
{'text': ['Ok perfect, that was exactly what I thought. Actually, they should be named “Stepper”. For example with SGD that will be “SGDStepper”. That seems more clear.'], 'answer_start': [1534]}
PyTorch version for cuda compute capability 3.0 (GTX 780M)
I installed PyTorch on my laptop only to find out that my GPU is not supported. Is there a version that I can use that still supports my GPU ? Or are there any other solutions ? /home/haziq/anaconda3/envs/dl/lib/python3.5/site-packages/torch/cuda/init.py:97: UserWarning: Found GPU0 GeForce GTX 78&hellip;
0
2018-04-03T15:31:29.967Z
You could build PyTorch from source. The installation is described here (https://github.com/pytorch/pytorch#from-source). Let me know, if you encounter any problems.
1
2018-04-03T15:32:55.641Z
https://discuss.pytorch.org/t/pytorch-version-for-cuda-compute-capability-3-0-gtx-780m/15889/2
Ok perfect, that was exactly what I thought. Actually, they should be named “Stepper”. For example with SGD that will be “SGDStepper”. That seems more clear. You could build PyTorch from source. The installation is described <a href="https://github.com/pytorch/pytorch#from-source">here</a>. Let me know, if you encounter any problems. It is not loaded in this line. RandomSampler class is just a tool for the Dataloader class. As I said before, if you have a look to the Dataloader class, you will find this : if batch_sampler is None: if sampler is None: if shuffle: sampler = RandomSampler(data&hellip;
925
{'text': ['You could build PyTorch from source.\n\nThe installation is described <a href="https://github.com/pytorch/pytorch#from-source">here</a>.\n\nLet me know, if you encounter any problems.'], 'answer_start': [925]}
Dataloader iterable
Dear PyTorch community, I am working on an optimization algorithm. This algorithm needs to take a random data in the dataloader at each iteration, so I do not have many epoch, but I have a max iteration variable (30000 for example). However, to implement it by the easiest way, I would have access t&hellip;
1
2017-11-02T21:30:26.749Z
It is not loaded in this line. RandomSampler class is just a tool for the Dataloader class. As I said before, if you have a look to the Dataloader class, you will find this : if batch_sampler is None: if sampler is None: if shuffle: sampler = RandomSampler(data&hellip;
0
2017-11-20T06:47:56.389Z
https://discuss.pytorch.org/t/dataloader-iterable/9437/13
Ok perfect, that was exactly what I thought. Actually, they should be named “Stepper”. For example with SGD that will be “SGDStepper”. That seems more clear. You could build PyTorch from source. The installation is described <a href="https://github.com/pytorch/pytorch#from-source">here</a>. Let me know, if you encounter any problems. It is not loaded in this line. RandomSampler class is just a tool for the Dataloader class. As I said before, if you have a look to the Dataloader class, you will find this : if batch_sampler is None: if sampler is None: if shuffle: sampler = RandomSampler(data&hellip;
338
{'text': ['It is not loaded in this line. RandomSampler class is just a tool for the Dataloader class. As I said before, if you have a look to the Dataloader class, you will find this :\n\nif batch_sampler is None:\n\nif sampler is None:\n\nif shuffle:\n\nsampler = RandomSampler(data&hellip;'], 'answer_start': [338]}
Read DICOM files in Pytorch
I am looking for how can I read a b8 file in Pytorch. Any comment will be appreciated. My images are cardiac ultrasound images which do have frames and converted from DICOM to b8 data.
1
2018-11-15T12:06:15.271Z
Just for the sake of debugging, could you copy the file into your current working directory, where your python script is located, and try: path = './ImgUS.dcm' pydicom.read_file(path)
1
2018-11-15T16:54:39.579Z
https://discuss.pytorch.org/t/read-dicom-files-in-pytorch/29666/12
Just for the sake of debugging, could you copy the file into your current working directory, where your python script is located, and try: path = &#39;./ImgUS.dcm&#39; pydicom.read_file(path) <a class="mention" href="/u/lausanne">@Lausanne</a> I think you should keep the original learning rate. If you use the DistributedDataParallel, the gradient will be averaged between each process. DataParallel sum the gradient. It is equal to the DistributedDataParallel. The reason is that the the loss will be averaged by 128 batchsize and &hellip; This error might be thrown, if pickle ends unexpectedly, e.g. if the downloaded file is corrupt. Did you try to rerun the code? If you still see this error, could you please delete the cached files? They should be located in /home/USER/.cache/torch/checkpoints by default.
1,222
{'text': ['Just for the sake of debugging, could you copy the file into your current working directory, where your python script is located, and try:\n\npath = &#39;./ImgUS.dcm&#39;\n\npydicom.read_file(path)'], 'answer_start': [1222]}
Is average the correct way for the gradient in DistributedDataParallel with multi nodes?
When I use DataParallel in one machine with two GPUs with 8 batch size(4 on each GPU), I get a satisfied training result. But, if I use DistributedDataParallel on two single GPU machines with 8 batch size(4 on each node), the training result is dissatisfied and convergence speed is slower than the D&hellip;
0
2019-01-09T13:37:58.948Z
<a class="mention" href="/u/lausanne">@Lausanne</a> I think you should keep the original learning rate. If you use the DistributedDataParallel, the gradient will be averaged between each process. DataParallel sum the gradient. It is equal to the DistributedDataParallel. The reason is that the the loss will be averaged by 128 batchsize and &hellip;
0
2019-01-16T13:40:46.209Z
https://discuss.pytorch.org/t/is-average-the-correct-way-for-the-gradient-in-distributeddataparallel-with-multi-nodes/34260/11
Just for the sake of debugging, could you copy the file into your current working directory, where your python script is located, and try: path = &#39;./ImgUS.dcm&#39; pydicom.read_file(path) <a class="mention" href="/u/lausanne">@Lausanne</a> I think you should keep the original learning rate. If you use the DistributedDataParallel, the gradient will be averaged between each process. DataParallel sum the gradient. It is equal to the DistributedDataParallel. The reason is that the the loss will be averaged by 128 batchsize and &hellip; This error might be thrown, if pickle ends unexpectedly, e.g. if the downloaded file is corrupt. Did you try to rerun the code? If you still see this error, could you please delete the cached files? They should be located in /home/USER/.cache/torch/checkpoints by default.
805
{'text': ['<a class="mention" href="/u/lausanne">@Lausanne</a> I think you should keep the original learning rate.\n\nIf you use the DistributedDataParallel, the gradient will be averaged between each process. DataParallel sum the gradient. It is equal to the DistributedDataParallel. The reason is that the the loss will be averaged by 128 batchsize and &hellip;'], 'answer_start': [805]}
Unpickling stack underflow
Hi, I’m meeting a problem loading the pre-trained model of Resnet-50. I just simply load the model and meet the following problem. I can’t find a solution to solve it. import torchvision res= torchvision.models.resnet50(pretrained=True) Traceback (most recent call last): File “”, line 1, in &hellip;
0
2020-01-31T02:49:30.641Z
This error might be thrown, if pickle ends unexpectedly, e.g. if the downloaded file is corrupt. Did you try to rerun the code? If you still see this error, could you please delete the cached files? They should be located in /home/USER/.cache/torch/checkpoints by default.
1
2020-01-31T04:42:00.496Z
https://discuss.pytorch.org/t/unpickling-stack-underflow/68181/2
Just for the sake of debugging, could you copy the file into your current working directory, where your python script is located, and try: path = &#39;./ImgUS.dcm&#39; pydicom.read_file(path) <a class="mention" href="/u/lausanne">@Lausanne</a> I think you should keep the original learning rate. If you use the DistributedDataParallel, the gradient will be averaged between each process. DataParallel sum the gradient. It is equal to the DistributedDataParallel. The reason is that the the loss will be averaged by 128 batchsize and &hellip; This error might be thrown, if pickle ends unexpectedly, e.g. if the downloaded file is corrupt. Did you try to rerun the code? If you still see this error, could you please delete the cached files? They should be located in /home/USER/.cache/torch/checkpoints by default.
545
{'text': ['This error might be thrown, if pickle ends unexpectedly, e.g. if the downloaded file is corrupt.\n\nDid you try to rerun the code?\n\nIf you still see this error, could you please delete the cached files?\n\nThey should be located in\n\n/home/USER/.cache/torch/checkpoints\n\nby default.'], 'answer_start': [545]}
Weighted Binary Cross Entropy
Hi, i was looking for a Weighted BCE Loss function in pytorch but couldnt find one, if such a function exists i would appriciate it if someone could provide its name.
0
2019-07-20T13:36:41.276Z
<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noopener">nn.BCEWithLogitsLoss</a> takes a weight and pos_weight argument. From the docs: weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch. pos_weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a weight of positive examples. Must be a&hellip;
4
2019-07-20T13:48:19.218Z
https://discuss.pytorch.org/t/weighted-binary-cross-entropy/51156/2
<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noopener">nn.BCEWithLogitsLoss</a> takes a weight and pos_weight argument. From the docs: weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch. pos_weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a weight of positive examples. Must be a&hellip; Use nn.ModuleList. My bad, zz needs to be a leaf node in the computation graph. Try the following: zz = Variable(z.data.expand(5, 1), requires_grad=True) L=(x*w*zz)**2 L.sum().backward() zz.grad
1,644
{'text': ['<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noopener">nn.BCEWithLogitsLoss</a> takes a weight and pos_weight argument.\n\nFrom the docs:\n\nweight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch.\n\npos_weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a weight of positive examples. Must be a&hellip;'], 'answer_start': [1644]}
Runtime Error: tensors are on different GPUs
Hi, I have encountered this problem. Traceback (most recent call last): File “main.py”, line 67, in network.train() File “/home/sp/text-classification-cnn/network/cnnTextNetwork.py”, line 115, in train logit = self.model(feature) File “/usr/local/lib/python2.7/dist-packages/torch/nn/modules/…
1
2017-04-21T09:42:35.897Z
Use nn.ModuleList.
5
2017-04-21T20:19:33.135Z
https://discuss.pytorch.org/t/runtime-error-tensors-are-on-different-gpus/2100/8
<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noopener">nn.BCEWithLogitsLoss</a> takes a weight and pos_weight argument. From the docs: weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch. pos_weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a weight of positive examples. Must be a&hellip; Use nn.ModuleList. My bad, zz needs to be a leaf node in the computation graph. Try the following: zz = Variable(z.data.expand(5, 1), requires_grad=True) L=(x*w*zz)**2 L.sum().backward() zz.grad
1,427
{'text': ['Use nn.ModuleList.'], 'answer_start': [1427]}
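The point of that one-line answer, illustrated: submodules kept in a plain Python list are invisible to .parameters() and .cuda(), so wrap them in nn.ModuleList. A generic text-CNN-style sketch, not the asker's actual model:

    import torch
    import torch.nn as nn

    class MultiKernelConv(nn.Module):
        def __init__(self, kernel_sizes=(3, 4, 5)):
            super().__init__()
            # nn.ModuleList registers each conv, so .cuda()/.parameters() see them
            self.convs = nn.ModuleList(
                [nn.Conv2d(1, 100, (k, 300)) for k in kernel_sizes]
            )

        def forward(self, x):
            return [torch.relu(conv(x)) for conv in self.convs]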
Is there anyway to calculate Gauss-Hessian matrix?
Hi all, Could you please let me know if there is any way to calculate the Gauss-Hessian matrix? Gauss-Newton is a quasi-Newton method which is defined here (https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm). It does not directly calculate the Hessian but approximates it by a broadcast product of two gradients, as in the following function. …
2
2017-11-16T02:39:12.674Z
My bad, zz needs to be a leaf node in the computation graph. Try the following: zz = Variable(z.data.expand(5, 1), requires_grad=True) L=(x*w*zz)**2 L.sum().backward() zz.grad
1
2017-11-16T19:50:49.104Z
https://discuss.pytorch.org/t/is-there-anyway-to-calculate-gauss-hessian-matrix/10016/8
<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.BCEWithLogitsLoss" rel="nofollow noopener">nn.BCEWithLogitsLoss</a> takes a weight and pos_weight argument. From the docs: weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a manual rescaling weight given to the loss of each batch element. If given, has to be a Tensor of size nbatch. pos_weight (<a href="https://pytorch.org/docs/stable/tensors.html#torch.Tensor" rel="nofollow noopener"> Tensor </a> , optional ) – a weight of positive examples. Must be a&hellip; Use nn.ModuleList. My bad, zz needs to be a leaf node in the computation graph. Try the following: zz = Variable(z.data.expand(5, 1), requires_grad=True) L=(x*w*zz)**2 L.sum().backward() zz.grad
624
{'text': ['My bad, zz needs to be a leaf node in the computation graph. Try the following:\n\nzz = Variable(z.data.expand(5, 1), requires_grad=True)\n\nL=(x*w*zz)**2\n\nL.sum().backward()\n\nzz.grad'], 'answer_start': [624]}
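The accepted answer uses the old `Variable` API. A minimal sketch of the same idea with the current tensor API (shapes are assumed, since the original snippet is truncated): `zz` has to be a leaf tensor for `.grad` to be populated after `backward()`.

```python
import torch

x = torch.randn(5, 1)
w = torch.randn(5, 1)
z = torch.randn(1)

# zz must be a leaf tensor for .grad to be filled in, so build it from the
# expanded data and then switch on requires_grad.
zz = z.expand(5, 1).clone().detach().requires_grad_(True)

L = (x * w * zz) ** 2
L.sum().backward()
print(zz.grad)   # gradient of the summed loss w.r.t. zz, shape [5, 1]
```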
Torch.eig() seems really unstable, can anyone explain this result?
Hi, Recently I have migrated a simple parallel orthogonalization function from numpy to pytorch. However, the torch.eig() function seems very unstable (or the precision is very low). The core of the code is the following: # Step1: Initialization: P_cuda = torch.randn([300,40000]).type(dtype) # S&hellip;
0
2017-12-27T02:37:49.626Z
I think your code might be assuming that the eigenvectors are orthogonal to each other, which would explain why torch.symeig works but torch.eig doesn’t work. The docs don’t give any guarantees on the results of torch.eig so this might not be a bug. In the backend, torch.eig binds to Lapack (for CP&hellip;
0
2018-01-04T20:26:22.847Z
https://discuss.pytorch.org/t/torch-eig-seems-really-unstable-can-anyone-explain-this-result/11579/14
I think your code might be assuming that the eigenvectors are orthogonal to each other, which would explain why torch.symeig works but torch.eig doesn’t work. The docs don’t give any guarantees on the results of torch.eig so this might not be a bug. In the backend, torch.eig binds to Lapack (for CP&hellip; Hi, First of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine. But about your question: When you are on GPU, torch.Tensor() will convert your data type to&hellip; The nn.Embedding layer would get an input tensor, and you should check its min and max value, e.g. as: print(x.min(), x.max()) out = self.embedding(x) # raises the index error This might give you the indices, which are out of bounds. However, the error description is a bit confusing, as you mentio&hellip;
1,606
{'text': ['I think your code might be assuming that the eigenvectors are orthogonal to each other, which would explain why torch.symeig works but torch.eig doesn’t work.\n\nThe docs don’t give any guarantees on the results of torch.eig so this might not be a bug. In the backend, torch.eig binds to Lapack (for CP&hellip;'], 'answer_start': [1606]}
Best way to convert a list to a tensor?
Let a=[1,2,3]; when I write b=torch.Tensor(a), my PyCharm background becomes yellow, like this: [image] Is there an elegant way to convert a list to a tensor, or is it my IDE&#39;s fault?
9
2019-11-04T12:29:24.169Z
Hi, First of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine. But about your question: When you are on GPU, torch.Tensor() will convert your data type to&hellip;
21
2019-11-04T14:48:52.854Z
https://discuss.pytorch.org/t/best-way-to-convert-a-list-to-a-tensor/59949/3
I think your code might be assuming that the eigenvectors are orthogonal to each other, which would explain why torch.symeig works but torch.eig doesn’t work. The docs don’t give any guarantees on the results of torch.eig so this might not be a bug. In the backend, torch.eig binds to Lapack (for CP&hellip; Hi, First of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine. But about your question: When you are on GPU, torch.Tensor() will convert your data type to&hellip; The nn.Embedding layer would get an input tensor, and you should check its min and max value, e.g. as: print(x.min(), x.max()) out = self.embedding(x) # raises the index error This might give you the indices, which are out of bounds. However, the error description is a bit confusing, as you mentio&hellip;
1,112
{'text': ['Hi,\n\nFirst of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine.\n\nBut about your question:\n\nWhen you are on GPU, torch.Tensor() will convert your data type to&hellip;'], 'answer_start': [1112]}
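The answer above is truncated in this record, so here is a minimal sketch of the usual options for converting a list (the comments on memory sharing assume the dtype and device already match, and the integer dtype follows the platform's numpy default):

```python
import numpy as np
import torch

a = [1, 2, 3]

t = torch.tensor(a)            # copies the data, infers the dtype

arr = np.array(a)
t2 = torch.as_tensor(arr)      # avoids a copy when dtype/device already match
t3 = torch.from_numpy(arr)     # numpy-only, shares memory with arr on CPU

print(t.dtype, t2.dtype, t3.dtype)
```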
'Device Side assert triggered at....' Error
Hello, I’m trying to run a CNN on a gpu through the command prompt (instead of jupyter). I keep getting the following error: C:/w/1/s/tmp_conda_3.7_100118/conda/conda-bld/pytorch_1579082551706/work/aten/src/THC/THCTensorIndex.cu:307: block: [0,0,0], thread: [0,0,0] Assertion `srcIndex &lt; srcSelect&hellip;
0
2020-05-22T13:43:14.570Z
The nn.Embedding layer would get an input tensor, and you should check its min and max value, e.g. as: print(x.min(), x.max()) out = self.embedding(x) # raises the index error This might give you the indices, which are out of bounds. However, the error description is a bit confusing, as you mentio&hellip;
0
2020-06-05T00:59:59.147Z
https://discuss.pytorch.org/t/device-side-assert-triggered-at-error/82488/26
I think your code might be assuming that the eigenvectors are orthogonal to each other, which would explain why torch.symeig works but torch.eig doesn’t work. The docs don’t give any guarantees on the results of torch.eig so this might not be a bug. In the backend, torch.eig binds to Lapack (for CP&hellip; Hi, First of all, PyCharm or most of IDEs cannot really analysis libraries like PyTorch which has C++ backend and Python frontend so it is normal to get warning or missing errors but your codes works fine. But about your question: When you are on GPU, torch.Tensor() will convert your data type to&hellip; The nn.Embedding layer would get an input tensor, and you should check its min and max value, e.g. as: print(x.min(), x.max()) out = self.embedding(x) # raises the index error This might give you the indices, which are out of bounds. However, the error description is a bit confusing, as you mentio&hellip;
618
{'text': ['The nn.Embedding layer would get an input tensor, and you should check its min and max value, e.g. as:\n\nprint(x.min(), x.max())\n\nout = self.embedding(x) # raises the index error\n\nThis might give you the indices, which are out of bounds. However, the error description is a bit confusing, as you mentio&hellip;'], 'answer_start': [618]}
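Following the debugging tip in the truncated answer, a minimal sketch of checking the index range before an embedding lookup (the vocabulary size and indices are made up for illustration; out-of-range indices are what trigger the device-side assert on the GPU):

```python
import torch
import torch.nn as nn

num_embeddings = 100
embedding = nn.Embedding(num_embeddings, 32)

# pretend these came from a tokenizer; one index is out of range on purpose
x = torch.tensor([[1, 5, 99, 100]])

print(x.min().item(), x.max().item())          # 1 100 -> 100 is invalid
if x.max() >= num_embeddings:
    print("out-of-range indices:", x[x >= num_embeddings])
else:
    out = embedding(x)                         # safe to look up
```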
K-means Loss Calculation
Can someone give an idea on how to implement k-means clustering loss in pytorch? [image] Also, I am using the PyTorch nn.MSELoss. Is there a way to add L2 regularization to this term? In short, I want to use an L2-regularized loss.
1
2018-07-30T23:30:23.021Z
If you use triple backticks (```python) before and just the backtics (```) after your code, it will be well-formatted. In Jupyter: import torch class KMeansClusteringLoss(torch.nn.Module): def __init__(self): super(KMeansClusteringLoss,self).__init__() def forward(self, encode_ou&hellip;
1
2018-08-01T10:02:46.251Z
https://discuss.pytorch.org/t/k-means-loss-calculation/22041/7
If you use triple backticks (```python) before and just the backtics (```) after your code, it will be well-formatted. In Jupyter: import torch class KMeansClusteringLoss(torch.nn.Module): def __init__(self): super(KMeansClusteringLoss,self).__init__() def forward(self, encode_ou&hellip; Yes, that’s why I asked about Windows. :wink: The driver should be new enough for Linux. It seems the minimal compute capability is now 3.7 based on <a href="https://github.com/pytorch/builder/commit/2aac90bd723dfbb3dc7728152bf0e6877ec4da16#diff-65bdb50b3eee4c380f3b65973141e454" rel="nofollow noopener">this commit</a> for the binaries, so you might need to build from source. The values in the model parameters won’t be changed, if you assign a new tensor to the key in the state_dict. You could either load the manipulated state_dict afterwards or change the parameter’s value inplace as shown here: model = nn.Linear(1, 1) print(model.weight) &gt; Parameter containing: tenso&hellip;
1,854
{'text': ['If you use triple backticks (```python) before and just the backtics (```) after your code, it will be well-formatted.\n\nIn Jupyter:\n\nimport torch\n\nclass KMeansClusteringLoss(torch.nn.Module):\n\ndef __init__(self):\n\nsuper(KMeansClusteringLoss,self).__init__()\n\ndef forward(self, encode_ou&hellip;'], 'answer_start': [1854]}
Minimum CUDA compute compatibility for PyTorch 1.3
I am using K40c GPUs with CUDA compute compatibility 3.5. I installed PyTorch via conda install pytorch torchvision cudatoolkit=10.1 -c pytorch However, when I run the following program: import torch print(torch.cuda.is_available()) print(torch.version.cuda) x = torch.tensor(1.0).cuda() y = to&hellip;
0
2019-11-12T22:54:06.540Z
Yes, that’s why I asked about Windows. :wink: The driver should be new enough for Linux. It seems the minimal compute capability is now 3.7 based on <a href="https://github.com/pytorch/builder/commit/2aac90bd723dfbb3dc7728152bf0e6877ec4da16#diff-65bdb50b3eee4c380f3b65973141e454" rel="nofollow noopener">this commit</a> for the binaries, so you might need to build from source.
1
2019-11-12T23:42:56.386Z
https://discuss.pytorch.org/t/minimum-cuda-compute-compatibility-for-pytorch-1-3/60794/5
If you use triple backticks (```python) before and just the backtics (```) after your code, it will be well-formatted. In Jupyter: import torch class KMeansClusteringLoss(torch.nn.Module): def __init__(self): super(KMeansClusteringLoss,self).__init__() def forward(self, encode_ou&hellip; Yes, that’s why I asked about Windows. :wink: The driver should be new enough for Linux. It seems the minimal compute capability is now 3.7 based on <a href="https://github.com/pytorch/builder/commit/2aac90bd723dfbb3dc7728152bf0e6877ec4da16#diff-65bdb50b3eee4c380f3b65973141e454" rel="nofollow noopener">this commit</a> for the binaries, so you might need to build from source. The values in the model parameters won’t be changed, if you assign a new tensor to the key in the state_dict. You could either load the manipulated state_dict afterwards or change the parameter’s value inplace as shown here: model = nn.Linear(1, 1) print(model.weight) &gt; Parameter containing: tenso&hellip;
1,222
{'text': ['Yes, that’s why I asked about Windows. :wink:\n\nThe driver should be new enough for Linux.\n\nIt seems the minimal compute capability is now 3.7 based on <a href="https://github.com/pytorch/builder/commit/2aac90bd723dfbb3dc7728152bf0e6877ec4da16#diff-65bdb50b3eee4c380f3b65973141e454" rel="nofollow noopener">this commit</a> for the binaries, so you might need to build from source.'], 'answer_start': [1222]}
Changing state dict value is not changing model
I am trying to change the value in my model’s state dict, but even after updating the state dict, the value does not change, any help would be appreciated. sd = model.state_dict() sd[&#39;encoder.layer.11.output.LayerNorm._running_mean&#39;] = layer_norm_stats[&#39;encoder.layer.11.output.LayerNorm._running_m&hellip;
1
2020-07-10T13:55:08.692Z
The values in the model parameters won’t be changed, if you assign a new tensor to the key in the state_dict. You could either load the manipulated state_dict afterwards or change the parameter’s value inplace as shown here: model = nn.Linear(1, 1) print(model.weight) &gt; Parameter containing: tenso&hellip;
4
2020-07-12T09:08:04.973Z
https://discuss.pytorch.org/t/changing-state-dict-value-is-not-changing-model/88695/4
If you use triple backticks (```python) before and just the backtics (```) after your code, it will be well-formatted. In Jupyter: import torch class KMeansClusteringLoss(torch.nn.Module): def __init__(self): super(KMeansClusteringLoss,self).__init__() def forward(self, encode_ou&hellip; Yes, that’s why I asked about Windows. :wink: The driver should be new enough for Linux. It seems the minimal compute capability is now 3.7 based on <a href="https://github.com/pytorch/builder/commit/2aac90bd723dfbb3dc7728152bf0e6877ec4da16#diff-65bdb50b3eee4c380f3b65973141e454" rel="nofollow noopener">this commit</a> for the binaries, so you might need to build from source. The values in the model parameters won’t be changed, if you assign a new tensor to the key in the state_dict. You could either load the manipulated state_dict afterwards or change the parameter’s value inplace as shown here: model = nn.Linear(1, 1) print(model.weight) &gt; Parameter containing: tenso&hellip;
675
{'text': ['The values in the model parameters won’t be changed, if you assign a new tensor to the key in the state_dict.\n\nYou could either load the manipulated state_dict afterwards or change the parameter’s value inplace as shown here:\n\nmodel = nn.Linear(1, 1)\n\nprint(model.weight)\n\n&gt; Parameter containing:\n\ntenso&hellip;'], 'answer_start': [675]}
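The answer above is cut off after the first few lines; a minimal sketch of both options it describes, on a toy `nn.Linear` (names follow the quoted snippet):

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)

# Option 1: edit the state_dict, then load it back into the model
sd = model.state_dict()
sd['weight'] = torch.ones_like(sd['weight'])
model.load_state_dict(sd)

# Option 2: change the parameter's values in place
with torch.no_grad():
    model.weight.fill_(2.0)

print(model.weight)   # reflects the in-place change
```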
LSTM training loss does not decrease
Hello, I have implemented a one layer LSTM network followed by a linear layer. I followed a few blog posts and PyTorch portal to implement variable length input sequencing with pack_padded and pad_packed sequence which appears to work well. However, the training loss does not decrease over time. T&hellip;
1
2019-10-07T17:17:03.006Z
Thank you Olivier for looking into it. Your hunch on the learning rate was in right direction. However, the problem was rather simple. I am not sure anyone can run into this. It may be very basic about pytorch. That being said, at the risk of sounding stupid, here’s the problem. overall_loss += los&hellip;
1
2019-10-09T23:03:28.993Z
https://discuss.pytorch.org/t/lstm-training-loss-does-not-decrease/57641/9
Thank you Olivier for looking into it. Your hunch on the learning rate was in right direction. However, the problem was rather simple. I am not sure anyone can run into this. It may be very basic about pytorch. That being said, at the risk of sounding stupid, here’s the problem. overall_loss += los&hellip; Hi, .copy_() will not change the contiguity of any Tensor. It will just read the content from b and write it to a. Not changing the size/strides. The problem is that your BN layers differ. I used the following code to solve the problem (just override the train function of your model): def train(self, mode=True, freeze_bn=False, freeze_bn_affine=False): super(MyModel, self).train(mode) if freeze_bn: f&hellip;
1,978
{'text': ['Thank you Olivier for looking into it. Your hunch on the learning rate was in right direction. However, the problem was rather simple. I am not sure anyone can run into this. It may be very basic about pytorch. That being said, at the risk of sounding stupid, here’s the problem.\n\noverall_loss += los&hellip;'], 'answer_start': [1978]}
Copy_() and memory format
I know that in pytorch 1.5 to() and clone() can preserve memory formats and therefore we can send non-contiguous tensors between devices. I wonder, what is the case for copy_()? Can we send non-contiguous tensors with it? If not, is there any suggested workaround for avoiding a copy? For example a = tor&hellip;
1
2020-05-20T08:38:11.392Z
Hi, .copy_() will not change the contiguity of any Tensor. It will just read the content from b and write it to a. Not changing the size/strides.
1
2020-05-20T15:50:16.320Z
https://discuss.pytorch.org/t/copy-and-memory-format/82136/2
Thank you Olivier for looking into it. Your hunch on the learning rate was in right direction. However, the problem was rather simple. I am not sure anyone can run into this. It may be very basic about pytorch. That being said, at the risk of sounding stupid, here’s the problem. overall_loss += los&hellip; Hi, .copy_() will not change the contiguity of any Tensor. It will just read the content from b and write it to a. Not changing the size/strides. The problem is that your BN layers differ. I used the following code to solve the problem (just override the train function of your model): def train(self, mode=True, freeze_bn=False, freeze_bn_affine=False): super(MyModel, self).train(mode) if freeze_bn: f&hellip;
1,298
{'text': ['Hi,\n\n.copy_() will not change the contiguity of any Tensor.\n\nIt will just read the content from b and write it to a. Not changing the size/strides.'], 'answer_start': [1298]}
[BUG] Weird behavior between evaluation and training mode
Hello There, I have got two networks. The first one is a network initialized with a pre-trained model plus some extra layers defined by me and the second one is the same network but trained for one epoch. It is worthwhile to mention that the pre-trained section of both networks was frozen in order &hellip;
1
2018-02-05T15:46:45.283Z
The problem is that your BN layers differ. I used the following code to solve the problem (just override the train function of your model): def train(self, mode=True, freeze_bn=False, freeze_bn_affine=False): super(MyModel, self).train(mode) if freeze_bn: f&hellip;
5
2018-02-06T13:19:39.013Z
https://discuss.pytorch.org/t/bug-weird-behavior-between-evaluation-and-training-mode/13297/17
Thank you Olivier for looking into it. Your hunch on the learning rate was in right direction. However, the problem was rather simple. I am not sure anyone can run into this. It may be very basic about pytorch. That being said, at the risk of sounding stupid, here’s the problem. overall_loss += los&hellip; Hi, .copy_() will not change the contiguity of any Tensor. It will just read the content from b and write it to a. Not changing the size/strides. The problem is that your BN layers differ. I used the following code to solve the problem (just override the train function of your model): def train(self, mode=True, freeze_bn=False, freeze_bn_affine=False): super(MyModel, self).train(mode) if freeze_bn: f&hellip;
457
{'text': ['The problem is that your BN layers differ.\n\nI used the following code to solve the problem (just override the train function of your model):\n\ndef train(self, mode=True, freeze_bn=False, freeze_bn_affine=False):\n\nsuper(MyModel, self).train(mode)\n\nif freeze_bn:\n\nf&hellip;'], 'answer_start': [457]}
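The quoted answer (overriding `train()` so batch norm stays in eval mode) is truncated, so here is a minimal sketch of the idea with a made-up model; the exact flags in the original post may differ:

```python
import torch.nn as nn

class MyModel(nn.Module):   # hypothetical model, mirroring the quoted signature
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)

    def train(self, mode=True, freeze_bn=False):
        super().train(mode)
        if freeze_bn:
            for m in self.modules():
                if isinstance(m, nn.BatchNorm2d):
                    m.eval()                        # keep running stats fixed
                    m.weight.requires_grad_(False)  # optionally freeze affine params
                    m.bias.requires_grad_(False)
        return self

model = MyModel().train(freeze_bn=True)
```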
Any different between model(input) and model.forward(input)
class MyModel(nn.Module): def __init__(self, cuda, word_dim, tag_dim, mem_dim, criterion): super(MyModel, self).__init__() def forward(input): .. # do something return output model = MyModel() Is there any different if I called model.forward(input) rather th&hellip;
12
2017-06-04T17:36:32.731Z
You should avoid calling Module.forward. The difference is that all the hooks are dispatched in the __call__ function, so if you call .forward and have hooks in your model, the hooks won’t have any effect
17
2017-06-05T05:18:00.293Z
https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
You should avoid calling Module.forward. The difference is that all the hooks are dispatched in the __call__ function, so if you call .forward and have hooks in your model, the hooks won’t have any effect Thanks <a class="mention" href="/u/ptrblck">@ptrblck</a> for the help, I finally found the issue. [image] These alpha here denotes probability after projection over target vocabulary, I was implementing this equation as is. As you can see there is a summation over multiplication of probabilities due to this alpha values were underflowin&hellip; Hi Sarra, could you use a translation service, please, as my French is quite bad? :stuck_out_tongue: From Deepl: “or I add ( tensor = torch.from_numpy(array)) ? in the source code please ?” If I understand it correctly, you would like to know, where to add this line of code? Try to add it rig&hellip;
1,454
{'text': ['You should avoid calling Module.forward.\n\nThe difference is that all the hooks are dispatched in the __call__ function, so if you call .forward and have hooks in your model, the hooks won’t have any effect'], 'answer_start': [1454]}
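A minimal sketch of the hook behavior the answer describes: hooks are dispatched in `__call__`, so `model(x)` triggers them while `model.forward(x)` silently skips them.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

def hook(module, inp, out):
    print("forward hook fired, output shape:", out.shape)

model.register_forward_hook(hook)

x = torch.randn(1, 4)
_ = model(x)          # goes through __call__, so the hook fires
_ = model.forward(x)  # bypasses __call__, so the hook stays silent
```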
Getting NaN values in backward pass
Hi, I am trying to implement <a href="https://arxiv.org/pdf/1811.02172.pdf" rel="nofollow noopener">this</a> paper. My implementation of the paper is <a href="https://github.com/desiredeveloper/npmtplus/blob/master/main.py" rel="nofollow noopener">here</a> for any information about the architecture. I am calculating the loss manually by taking a negative log of the probability. (Probability of the target sequence is calculated by the equation defined in the paper) The e&hellip;
0
2020-06-01T12:25:22.995Z
Thanks <a class="mention" href="/u/ptrblck">@ptrblck</a> for the help, I finally found the issue. [image] These alpha here denotes probability after projection over target vocabulary, I was implementing this equation as is. As you can see there is a summation over multiplication of probabilities due to this alpha values were underflowin&hellip;
2
2020-06-15T07:56:10.173Z
https://discuss.pytorch.org/t/getting-nan-values-in-backward-pass/83696/13
You should avoid calling Module.forward. The difference is that all the hooks are dispatched in the __call__ function, so if you call .forward and have hooks in your model, the hooks won’t have any effect Thanks <a class="mention" href="/u/ptrblck">@ptrblck</a> for the help, I finally found the issue. [image] These alpha here denotes probability after projection over target vocabulary, I was implementing this equation as is. As you can see there is a summation over multiplication of probabilities due to this alpha values were underflowin&hellip; Hi Sarra, could you use a translation service, please, as my French is quite bad? :stuck_out_tongue: From Deepl: “or I add ( tensor = torch.from_numpy(array)) ? in the source code please ?” If I understand it correctly, you would like to know, where to add this line of code? Try to add it rig&hellip;
933
{'text': ['Thanks <a class="mention" href="/u/ptrblck">@ptrblck</a> for the help, I finally found the issue.\n\n[image]\n\nThese alpha here denotes probability after projection over target vocabulary, I was implementing this equation as is.\n\nAs you can see there is a summation over multiplication of probabilities due to this alpha values were underflowin&hellip;'], 'answer_start': [933]}
'numpy.ndarray' object has no attribute 'cuda'
When typing: biasRestaurant = to_np(m.ib(V(topRestIdx))) #converting the torch embedding to numpy matrix I get this error: AttributeError Traceback (most recent call last) in () ----&gt; 1 biasRestaurant = to_np(m.ib(V(topRestIdx))) #converting the torch embedding to&hellip;
0
2020-05-08T00:15:28.045Z
Hi Sarra, could you use a translation service, please, as my French is quite bad? :stuck_out_tongue: From Deepl: “or I add ( tensor = torch.from_numpy(array)) ? in the source code please ?” If I understand it correctly, you would like to know, where to add this line of code? Try to add it rig&hellip;
2
2020-05-08T20:30:03.565Z
https://discuss.pytorch.org/t/numpy-ndarray-object-has-no-attribute-cuda/80260/4
You should avoid calling Module.forward. The difference is that all the hooks are dispatched in the __call__ function, so if you call .forward and have hooks in your model, the hooks won’t have any effect Thanks <a class="mention" href="/u/ptrblck">@ptrblck</a> for the help, I finally found the issue. [image] These alpha here denotes probability after projection over target vocabulary, I was implementing this equation as is. As you can see there is a summation over multiplication of probabilities due to this alpha values were underflowin&hellip; Hi Sarra, could you use a translation service, please, as my French is quite bad? :stuck_out_tongue: From Deepl: “or I add ( tensor = torch.from_numpy(array)) ? in the source code please ?” If I understand it correctly, you would like to know, where to add this line of code? Try to add it rig&hellip;
556
{'text': ['Hi Sarra,\n\ncould you use a translation service, please, as my French is quite bad? :stuck_out_tongue:\n\nFrom Deepl:\n\n“or I add ( tensor = torch.from_numpy(array)) ? in the source code please ?”\n\nIf I understand it correctly, you would like to know, where to add this line of code?\n\nTry to add it rig&hellip;'], 'answer_start': [556]}
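The answer above is truncated before the actual fix; the underlying point is that `.cuda()` exists on tensors, not on numpy arrays, so the array has to be converted first. A minimal sketch:

```python
import numpy as np
import torch

array = np.random.rand(10, 4).astype(np.float32)

tensor = torch.from_numpy(array)   # numpy -> torch (shares memory on CPU)
if torch.cuda.is_available():
    tensor = tensor.cuda()         # only tensors have .cuda(); ndarrays do not
```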
Implementing Neural Style Transfer From Scratch
Hi! I am trying to implement the neural style transfer model from the original Gatys’ paper from scratch. I am aware of the tutorial on the website, but I am trying to implement it myself to see if I understand the model right, also, I am trying to stay as close as possible to the paper. I have com&hellip;
4
2019-04-03T03:23:24.301Z
Looks like, ultimately, the problem was with the content - style loss balance. I also increased the resolution of the images and got some good results. <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/945e03a96afdf6b0c5d891d41e50907494c475ef.png" data-download-href="https://discuss.pytorch.org/uploads/default/945e03a96afdf6b0c5d891d41e50907494c475ef" title="iter_9500.png">[iter_9500]</a> <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/5/50a8c29404db35cb528b51042bc76e523f3b4860.png" data-download-href="https://discuss.pytorch.org/uploads/default/50a8c29404db35cb528b51042bc76e523f3b4860" title="iter_14800.png">[iter_14800]</a> Thank you, everyone!
3
2019-04-04T16:09:07.516Z
https://discuss.pytorch.org/t/implementing-neural-style-transfer-from-scratch/41540/11
Looks like, ultimately, the problem was with the content - style loss balance. I also increased the resolution of the images and got some good results. <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/945e03a96afdf6b0c5d891d41e50907494c475ef.png" data-download-href="https://discuss.pytorch.org/uploads/default/945e03a96afdf6b0c5d891d41e50907494c475ef" title="iter_9500.png">[iter_9500]</a> <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/5/50a8c29404db35cb528b51042bc76e523f3b4860.png" data-download-href="https://discuss.pytorch.org/uploads/default/50a8c29404db35cb528b51042bc76e523f3b4860" title="iter_14800.png">[iter_14800]</a> Thank you, everyone! The batch_size in my code example would correspond to the batch size you set in your DataLoader. To update the conf mat you would have to pass and return it from the method: def confusion_matrix(preds, labels, conf_matrix): preds = torch.argmax(preds, 1) for p, t in zip(preds, labels): &hellip; Basically you could handle it like a 2-dimensional convolution with another “spatial” dimension. I.e. the target should contain the class indices without a channel dimension in the shape [batch_size, d, h, w]. I’ve created a small dummy example using a simple model to segment a square in the volum&hellip;
1,724
{'text': ['Looks like, ultimately, the problem was with the content - style loss balance. I also increased the resolution of the images and got some good results.\n\n<a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/945e03a96afdf6b0c5d891d41e50907494c475ef.png" data-download-href="https://discuss.pytorch.org/uploads/default/945e03a96afdf6b0c5d891d41e50907494c475ef" title="iter_9500.png">[iter_9500]</a> <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/5/50a8c29404db35cb528b51042bc76e523f3b4860.png" data-download-href="https://discuss.pytorch.org/uploads/default/50a8c29404db35cb528b51042bc76e523f3b4860" title="iter_14800.png">[iter_14800]</a>\n\nThank you, everyone!'], 'answer_start': [1724]}
How to check and read Confusion matrix?
This query seems a little odd because I am printing a multi-class Confusion Matrix and what I am getting is not completely understandable for me. I got the code for Confusion matrix from this helpful forum and I have changed a little bit. I have put the whole confusion matrix into a function and I h&hellip;
0
2019-04-06T17:15:20.357Z
The batch_size in my code example would correspond to the batch size you set in your DataLoader. To update the conf mat you would have to pass and return it from the method: def confusion_matrix(preds, labels, conf_matrix): preds = torch.argmax(preds, 1) for p, t in zip(preds, labels): &hellip;
1
2019-04-06T20:14:40.928Z
https://discuss.pytorch.org/t/how-to-check-and-read-confusion-matrix/41835/5
Looks like, ultimately, the problem was with the content - style loss balance. I also increased the resolution of the images and got some good results. <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/945e03a96afdf6b0c5d891d41e50907494c475ef.png" data-download-href="https://discuss.pytorch.org/uploads/default/945e03a96afdf6b0c5d891d41e50907494c475ef" title="iter_9500.png">[iter_9500]</a> <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/5/50a8c29404db35cb528b51042bc76e523f3b4860.png" data-download-href="https://discuss.pytorch.org/uploads/default/50a8c29404db35cb528b51042bc76e523f3b4860" title="iter_14800.png">[iter_14800]</a> Thank you, everyone! The batch_size in my code example would correspond to the batch size you set in your DataLoader. To update the conf mat you would have to pass and return it from the method: def confusion_matrix(preds, labels, conf_matrix): preds = torch.argmax(preds, 1) for p, t in zip(preds, labels): &hellip; Basically you could handle it like a 2-dimensional convolution with another “spatial” dimension. I.e. the target should contain the class indices without a channel dimension in the shape [batch_size, d, h, w]. I’ve created a small dummy example using a simple model to segment a square in the volum&hellip;
1,587
{'text': ['The batch_size in my code example would correspond to the batch size you set in your DataLoader.\n\nTo update the conf mat you would have to pass and return it from the method:\n\ndef confusion_matrix(preds, labels, conf_matrix):\n\npreds = torch.argmax(preds, 1)\n\nfor p, t in zip(preds, labels):\n\n&hellip;'], 'answer_start': [1587]}
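The confusion-matrix snippet in the answer is cut off mid-loop; below is a hedged completion. The row = true class / column = predicted class convention is an assumption, and the toy loop just stands in for a DataLoader.

```python
import torch

def confusion_matrix(preds, labels, conf_matrix):
    # preds: raw logits [batch_size, num_classes]; labels: class indices [batch_size]
    preds = torch.argmax(preds, 1)
    for p, t in zip(preds, labels):
        conf_matrix[t, p] += 1          # assumed convention: row = true, col = predicted
    return conf_matrix

num_classes = 4
conf_mat = torch.zeros(num_classes, num_classes)

for _ in range(3):                       # toy loop standing in for a DataLoader
    logits = torch.randn(8, num_classes)
    targets = torch.randint(0, num_classes, (8,))
    conf_mat = confusion_matrix(logits, targets, conf_mat)

print(conf_mat)
```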
Understanding how to label/target tensors for 3D volumes
I’ve understood the process of labeling for semantic segmentation for 2D images. I was able to create label or target tensors using a colour coded method provided for the dataset. The colour codes provided were: (&quot;Animal&quot;, np.array([64, 128, 64], dtype=np.uint8)), (&quot;Archway&quot;, np.array([192,&hellip;
0
2018-11-20T19:05:50.737Z
Basically you could handle it like a 2-dimensional convolution with another “spatial” dimension. I.e. the target should contain the class indices without a channel dimension in the shape [batch_size, d, h, w]. I’ve created a small dummy example using a simple model to segment a square in the volum&hellip;
1
2018-11-20T21:01:36.331Z
https://discuss.pytorch.org/t/understanding-how-to-label-target-tensors-for-3d-volumes/30101/2
Looks like, ultimately, the problem was with the content - style loss balance. I also increased the resolution of the images and got some good results. <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/9/945e03a96afdf6b0c5d891d41e50907494c475ef.png" data-download-href="https://discuss.pytorch.org/uploads/default/945e03a96afdf6b0c5d891d41e50907494c475ef" title="iter_9500.png">[iter_9500]</a> <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/2X/5/50a8c29404db35cb528b51042bc76e523f3b4860.png" data-download-href="https://discuss.pytorch.org/uploads/default/50a8c29404db35cb528b51042bc76e523f3b4860" title="iter_14800.png">[iter_14800]</a> Thank you, everyone! The batch_size in my code example would correspond to the batch size you set in your DataLoader. To update the conf mat you would have to pass and return it from the method: def confusion_matrix(preds, labels, conf_matrix): preds = torch.argmax(preds, 1) for p, t in zip(preds, labels): &hellip; Basically you could handle it like a 2-dimensional convolution with another “spatial” dimension. I.e. the target should contain the class indices without a channel dimension in the shape [batch_size, d, h, w]. I’ve created a small dummy example using a simple model to segment a square in the volum&hellip;
1,026
{'text': ['Basically you could handle it like a 2-dimensional convolution with another “spatial” dimension.\n\nI.e. the target should contain the class indices without a channel dimension in the shape [batch_size, d, h, w].\n\nI’ve created a small dummy example using a simple model to segment a square in the volum&hellip;'], 'answer_start': [1026]}
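The dummy example mentioned in the answer is not included in this record; a minimal sketch of the shape convention it describes, using a single `nn.Conv3d` instead of a full segmentation model:

```python
import torch
import torch.nn as nn

batch_size, num_classes = 2, 4
d, h, w = 8, 16, 16

model = nn.Conv3d(1, num_classes, kernel_size=3, padding=1)

volume = torch.randn(batch_size, 1, d, h, w)
# target holds class indices with no channel dimension: [batch_size, d, h, w]
target = torch.randint(0, num_classes, (batch_size, d, h, w))

logits = model(volume)                       # [batch_size, num_classes, d, h, w]
loss = nn.CrossEntropyLoss()(logits, target)
loss.backward()
```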
Efficient batch dot product
I have given a batch of row vectors stored in the matrix U, a batch of column vectors stored in the matrix V and a single matrix M. For each row vector u in U and each column vector v in V I want to compute the sum of the matrix product u *M*v for each batch. How can I efficiently implement this (p&hellip;
0
2019-04-01T11:42:03.140Z
torch.einsum(&#39;bi,ij,bj&#39;, U, M, V) if you want the sum, &#39;bi,ij,bj-&gt;b&#39; if you prefer the batch items separately. :slight_smile: Best regards Thomas
3
2019-04-03T20:15:00.976Z
https://discuss.pytorch.org/t/efficient-batch-dot-product/41382/6
torch.einsum(&#39;bi,ij,bj&#39;, U, M, V) if you want the sum, &#39;bi,ij,bj-&gt;b&#39; if you prefer the batch items separately. :slight_smile: Best regards Thomas Based on the backtrace it seems that numpy’s libopenblas creates the seg fault. Did you install numpy with the PyTorch wheels? If not, install it or update to the latest PyTorch release, as recently we’ve found <a href="https://github.com/pytorch/pytorch/issues/66353">this issue</a>, which might be related. copy past the source code from github and make some changes either in that part: self.classifier = nn.Sequential( nn.Linear(512 * 7 * 7, 4096), nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(True), nn.Dropout(), &hellip;
2,668
{'text': ['torch.einsum(&#39;bi,ij,bj&#39;, U, M, V) if you want the sum, &#39;bi,ij,bj-&gt;b&#39; if you prefer the batch items separately. :slight_smile:\n\nBest regards\n\nThomas'], 'answer_start': [2668]}
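A small self-contained check of the einsum strings from the answer, comparing them against an explicit per-batch loop (shapes are arbitrary):

```python
import torch

B, n, m = 4, 3, 5
U = torch.randn(B, n)     # batch of row vectors
M = torch.randn(n, m)
V = torch.randn(B, m)     # batch of column vectors (stored as rows)

per_item = torch.einsum('bi,ij,bj->b', U, M, V)   # one scalar per batch item
total = torch.einsum('bi,ij,bj', U, M, V)         # summed over the batch

ref = torch.stack([U[b] @ M @ V[b] for b in range(B)])   # reference loop
print(torch.allclose(per_item, ref), torch.allclose(total, ref.sum()))
```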
Segmentation Fault when importing PyTorch
When I tried to import PyTorch in python, it crashed with a segfault error: <a class="lightbox" href="https://discuss.pytorch.org/uploads/default/original/3X/3/4/34b1c75b435bf0e36c28c335327820a2770e0239.png" data-download-href="https://discuss.pytorch.org/uploads/default/34b1c75b435bf0e36c28c335327820a2770e0239" title="image">[image]</a> “Segmentation fault (core dumped)” is all I have about the issue. Since the sys admin is very disagreeable, I have to figure out what the problem is myself. But I really don’t know what the cause of the crash coul&hellip;
1
2021-10-18T04:52:41.677Z
Based on the backtrace it seems that numpy’s libopenblas creates the seg fault. Did you install numpy with the PyTorch wheels? If not, install it or update to the latest PyTorch release, as recently we’ve found <a href="https://github.com/pytorch/pytorch/issues/66353">this issue</a>, which might be related.
0
2021-11-01T10:15:19.635Z
https://discuss.pytorch.org/t/segmentation-fault-when-importing-pytorch/134486/7
torch.einsum(&#39;bi,ij,bj&#39;, U, M, V) if you want the sum, &#39;bi,ij,bj-&gt;b&#39; if you prefer the batch items separately. :slight_smile: Best regards Thomas Based on the backtrace it seems that numpy’s libopenblas creates the seg fault. Did you install numpy with the PyTorch wheels? If not, install it or update to the latest PyTorch release, as recently we’ve found <a href="https://github.com/pytorch/pytorch/issues/66353">this issue</a>, which might be related. copy past the source code from github and make some changes either in that part: self.classifier = nn.Sequential( nn.Linear(512 * 7 * 7, 4096), nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(True), nn.Dropout(), &hellip;
1,501
{'text': ['Based on the backtrace it seems that numpy’s libopenblas creates the seg fault. Did you install numpy with the PyTorch wheels? If not, install it or update to the latest PyTorch release, as recently we’ve found <a href="https://github.com/pytorch/pytorch/issues/66353">this issue</a>, which might be related.'], 'answer_start': [1501]}
VGG 16 Architecture
Hello Forum, I wanted to conduct some experiments by trying to tweak the architecture of VGG 16, to try get a sense of author’s intuition. And I am not able to find the code for the pytorch implementation of VGG 16. I only find this link <a href="https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py" rel="nofollow noopener">https://github.com/pytorch/vision/blob/master/torchvision/mo&hellip;</a>
0
2018-10-11T04:12:14.204Z
copy past the source code from github and make some changes either in that part: self.classifier = nn.Sequential( nn.Linear(512 * 7 * 7, 4096), nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(True), nn.Dropout(), &hellip;
1
2018-10-11T04:27:13.462Z
https://discuss.pytorch.org/t/vgg-16-architecture/27024/6
torch.einsum(&#39;bi,ij,bj&#39;, U, M, V) if you want the sum, &#39;bi,ij,bj-&gt;b&#39; if you prefer the batch items separately. :slight_smile: Best regards Thomas Based on the backtrace it seems that numpy’s libopenblas creates the seg fault. Did you install numpy with the PyTorch wheels? If not, install it or update to the latest PyTorch release, as recently we’ve found <a href="https://github.com/pytorch/pytorch/issues/66353">this issue</a>, which might be related. copy past the source code from github and make some changes either in that part: self.classifier = nn.Sequential( nn.Linear(512 * 7 * 7, 4096), nn.ReLU(True), nn.Dropout(), nn.Linear(4096, 4096), nn.ReLU(True), nn.Dropout(), &hellip;
476
{'text': ['copy past the source code from github and make some changes either in that part:\n\nself.classifier = nn.Sequential(\n\nnn.Linear(512 * 7 * 7, 4096),\n\nnn.ReLU(True),\n\nnn.Dropout(),\n\nnn.Linear(4096, 4096),\n\nnn.ReLU(True),\n\nnn.Dropout(),\n\n&hellip;'], 'answer_start': [476]}
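Besides copying the source file as the answer suggests, the classifier of the torchvision VGG16 can be tweaked directly on the loaded model. A minimal sketch (the target class count is made up; `pretrained=True` is the older torchvision argument, newer releases use `weights=`):

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical target task

model = models.vgg16(pretrained=True)

# swap only the last classifier layer ...
model.classifier[6] = nn.Linear(4096, num_classes)

# ... or rebuild the whole classifier block, as in the quoted answer
model.classifier = nn.Sequential(
    nn.Linear(512 * 7 * 7, 4096),
    nn.ReLU(True),
    nn.Dropout(),
    nn.Linear(4096, num_classes),
)
```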
RuntimeError: expected backend CUDA and dtype Float but got backend CPU and dtype Float
I want to use a custom filter in CNN. The filter has size 5*5 and each entry is a function of three variables: theta, Lambda, psi. There are two convolutional layers followed by two fully connected layers. I tested my filter on MNIST dataset. But when I run it on GPU, I encounter the error: RuntimeE&hellip;
0
2019-08-05T22:11:43.436Z
Try to use nn.Parameter for your return values in whole_filter and one_filter, as this will properly register these filters as internal parameters, and will thus push them also to the GPU in the model.to(device) call.
1
2019-08-05T22:43:04.956Z
https://discuss.pytorch.org/t/runtimeerror-expected-backend-cuda-and-dtype-float-but-got-backend-cpu-and-dtype-float/52617/2
Try to use nn.Parameter for your return values in whole_filter and one_filter, as this will properly register these filters as internal parameters, and will thus push them also to the GPU in the model.to(device) call. Here is a small dummy example using multiple video folders. Note that I’ve used tensors directly, so you should add your frame loading logic into the Dataset. class MyDataset(Dataset): def __init__(self, videos, transform=None, nb_frames=3): self.nb_frames = nb_frames self.tran&hellip; On second thoughts, it is much better to use Python sets. all_params = set(model.parameters()) wd_params = set() for m in model.modules(): if isinstance(m, (nn.Linear, nn.Conv*)): wd_params.add(m.weight) no_wd = all_params - wd_params
1,434
{'text': ['Try to use nn.Parameter for your return values in whole_filter and one_filter, as this will properly register these filters as internal parameters, and will thus push them also to the GPU in the model.to(device) call.'], 'answer_start': [1434]}
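A minimal sketch of the advice in the answer, with a made-up 5x5 filter that is a function of theta, Lambda and psi (not the poster's actual filter): wrapping the three scalars in `nn.Parameter` registers them with the module, so `model.to(device)` moves them along with everything else and they stay on the same device as the input.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamFilterConv(nn.Module):   # hypothetical module
    def __init__(self):
        super().__init__()
        # nn.Parameter registers these, so .to(device)/.cuda() moves them too
        self.theta = nn.Parameter(torch.rand(1))
        self.Lambda = nn.Parameter(torch.rand(1))
        self.psi = nn.Parameter(torch.rand(1))

    def build_filter(self):
        # toy 5x5 kernel, differentiable w.r.t. the three parameters
        grid = torch.linspace(-2, 2, 5, device=self.theta.device)
        xx, yy = grid.view(1, 5), grid.view(5, 1)
        k = torch.cos(2 * math.pi / self.Lambda
                      * (xx * torch.cos(self.theta) + yy * torch.sin(self.theta))
                      + self.psi)
        return k.view(1, 1, 5, 5)

    def forward(self, x):
        return F.conv2d(x, self.build_filter(), padding=2)

out = ParamFilterConv()(torch.randn(1, 1, 28, 28))
```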
A dataloader for multiple similar inputs
Hi All, I have a network that takes three images in the input layer. Now the three images must be frames of the same video (this can be known only from the filename of the image). I understood how to change the dataloader’s __getitem__ to send multiple inputs from <a href="https://discuss.pytorch.org/t/upload-a-customize-data-set-for-multi-regression-task/43413/2">here</a>. But how do I make sure three&hellip;
0
2019-07-22T15:57:36.511Z
Here is a small dummy example using multiple video folders. Note that I’ve used tensors directly, so you should add your frame loading logic into the Dataset. class MyDataset(Dataset): def __init__(self, videos, transform=None, nb_frames=3): self.nb_frames = nb_frames self.tran&hellip;
1
2019-07-28T23:24:31.952Z
https://discuss.pytorch.org/t/a-dataloader-for-multiple-similar-inputs/51284/4
Try to use nn.Parameter for your return values in whole_filter and one_filter, as this will properly register these filters as internal parameters, and will thus push them also to the GPU in the model.to(device) call. Here is a small dummy example using multiple video folders. Note that I’ve used tensors directly, so you should add your frame loading logic into the Dataset. class MyDataset(Dataset): def __init__(self, videos, transform=None, nb_frames=3): self.nb_frames = nb_frames self.tran&hellip; On second thoughts, it is much better to use Python sets. all_params = set(model.parameters()) wd_params = set() for m in model.modules(): if isinstance(m, (nn.Linear, nn.Conv*)): wd_params.add(m.weight) no_wd = all_params - wd_params
935
{'text': ['Here is a small dummy example using multiple video folders.\n\nNote that I’ve used tensors directly, so you should add your frame loading logic into the Dataset.\n\nclass MyDataset(Dataset):\n\ndef __init__(self, videos, transform=None, nb_frames=3):\n\nself.nb_frames = nb_frames\n\nself.tran&hellip;'], 'answer_start': [935]}
Weight decay only for weights of nn.Linear and nn.Conv*
In many of the papers and blogs that I read, for example, the recent <a href="https://arxiv.org/abs/2102.06171" rel="noopener nofollow ugc">NFNet</a> paper, the authors emphasize the importance of only including the convolution &amp; linear layer weights in weight decay. Bias values for all layers, as well as the weight and bias values of normalization layers, e.g., LayerNorm,&hellip;
1
2021-03-10T14:59:39.672Z
On second thoughts, it is much better to use Python sets. all_params = set(model.parameters()) wd_params = set() for m in model.modules(): if isinstance(m, (nn.Linear, nn.Conv*)): wd_params.add(m.weight) no_wd = all_params - wd_params
0
2021-03-15T06:11:54.862Z
https://discuss.pytorch.org/t/weight-decay-only-for-weights-of-nn-linear-and-nn-conv/114348/6
Try to use nn.Parameter for your return values in whole_filter and one_filter, as this will properly register these filters as internal parameters, and will thus push them also to the GPU in the model.to(device) call. Here is a small dummy example using multiple video folders. Note that I’ve used tensors directly, so you should add your frame loading logic into the Dataset. class MyDataset(Dataset): def __init__(self, videos, transform=None, nb_frames=3): self.nb_frames = nb_frames self.tran&hellip; On second thoughts, it is much better to use Python sets. all_params = set(model.parameters()) wd_params = set() for m in model.modules(): if isinstance(m, (nn.Linear, nn.Conv*)): wd_params.add(m.weight) no_wd = all_params - wd_params
510
{'text': ['On second thoughts, it is much better to use Python sets.\n\nall_params = set(model.parameters())\n\nwd_params = set()\n\nfor m in model.modules():\n\nif isinstance(m, (nn.Linear, nn.Conv*)):\n\nwd_params.add(m.weight)\n\nno_wd = all_params - wd_params'], 'answer_start': [510]}
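The `nn.Conv*` in the quoted snippet is shorthand and not valid Python; a runnable variant of the same idea, spelling out the conv classes and handing the two groups to the optimizer (the model and hyperparameters are made up):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU(), nn.Flatten(),
    nn.Linear(16 * 30 * 30, 10),
)

decay, no_decay = [], []
for m in model.modules():
    if isinstance(m, (nn.Linear, nn.Conv1d, nn.Conv2d, nn.Conv3d)):
        decay.append(m.weight)                 # only weights get weight decay
        if m.bias is not None:
            no_decay.append(m.bias)
    elif isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d, nn.LayerNorm)):
        no_decay.extend(m.parameters())        # norm layers: no decay at all

optimizer = torch.optim.SGD(
    [{'params': decay, 'weight_decay': 1e-4},
     {'params': no_decay, 'weight_decay': 0.0}],
    lr=0.1, momentum=0.9,
)
```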
Can i remove a layer from a pre-trained model while loading the model weights?
Hi, I am working on a problem that requires pre-training a first model at the beginning and then using this pre-trained model and fine-tuning it along with a second model. When training the first model, it requires a classification layer in order to compute a loss for it. However, I do not need my &hellip;
0
2019-10-10T17:14:29.765Z
Could you try to save the state_dict instead of the model and optimizer directly? Then while restoring, try to use strict=False in .load_state_dict.
3
2019-10-10T20:47:20.572Z
https://discuss.pytorch.org/t/can-i-remove-a-layer-from-a-pre-trained-model-while-loading-the-model-weights/57899/2
Could you try to save the state_dict instead of the model and optimizer directly? Then while restoring, try to use strict=False in .load_state_dict. So by saying “create a separate Data object for your training, testing and validation data sets.” Do you mean that I could convert my current data construction: data = Data(x = x, edge_index = edge_index, num_classes = 2, ) Into something like: train_&hellip; As <a class="mention" href="/u/ybj14">@ybj14</a> said, the pseudo-random number generator uses the seed as its initial seed and generates all sequential numbers based on this initial seed. That doesn’t mean that every “random” number will have the exact same value (which would create a useless random number generator), but that the sequ&hellip;
1,500
{'text': ['Could you try to save the state_dict instead of the model and optimizer directly?\n\nThen while restoring, try to use strict=False in .load_state_dict.'], 'answer_start': [1500]}
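A minimal sketch of the `strict=False` suggestion, using a toy two-layer model and a hypothetical checkpoint path: keys in the checkpoint that have no counterpart in the new model are simply reported as unexpected instead of raising an error.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.Linear(20, 5))
torch.save(model.state_dict(), 'pretrained.pth')   # hypothetical checkpoint

# new model without the final classification layer
backbone = nn.Sequential(nn.Linear(10, 20))

state_dict = torch.load('pretrained.pth')
missing, unexpected = backbone.load_state_dict(state_dict, strict=False)
print(unexpected)   # keys of the dropped layer, e.g. ['1.weight', '1.bias']
```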
How to define train_mask, val_mask, test_mask, ... in my own dataset?
I’ve tried to build a GCN to train my own data which are nodes with only one feature on each node. However I encountered a problem, how can I define attributes “train_mask”, “test_mask”, “val_mask” like what they have in the built-in dataset? My code: ############################################&hellip;
0
2019-09-18T17:50:04.168Z
So by saying “create a separate Data object for your training, testing and validation data sets.” Do you mean that I could convert my current data construction: data = Data(x = x, edge_index = edge_index, num_classes = 2, ) Into something like: train_&hellip;
0
2019-09-22T18:47:44.064Z
https://discuss.pytorch.org/t/how-to-define-train-mask-val-mask-test-mask-in-my-own-dataset/56289/5
Could you try to save the state_dict instead of the model and optimizer directly? Then while restoring, try to use strict=False in .load_state_dict. So by saying “create a separate Data object for your training, testing and validation data sets.” Do you mean that I could convert my current data construction: data = Data(x = x, edge_index = edge_index, num_classes = 2, ) Into something like: train_&hellip; As <a class="mention" href="/u/ybj14">@ybj14</a> said, the pseudo-random number generator uses the seed as its initial seed and generates all sequential numbers based on this initial seed. That doesn’t mean that every “random” number will have the exact same value (which would create a useless random number generator), but that the sequ&hellip;
900
{'text': ['So by saying “create a separate Data object for your training, testing and validation data sets.”\n\nDo you mean that I could convert my current data construction:\n\ndata = Data(x = x,\n\nedge_index = edge_index,\n\nnum_classes = 2,\n\n)\n\nInto something like:\n\ntrain_&hellip;'], 'answer_start': [900]}
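This thread is about PyTorch Geometric rather than core PyTorch; assuming torch_geometric is installed, a minimal sketch of attaching boolean masks as extra node-level attributes on a Data object (sizes and the 60/20/20 split are made up):

```python
import torch
from torch_geometric.data import Data   # assumes torch_geometric is installed

num_nodes = 100
x = torch.randn(num_nodes, 1)
edge_index = torch.randint(0, num_nodes, (2, 400))
y = torch.randint(0, 2, (num_nodes,))

train_mask = torch.zeros(num_nodes, dtype=torch.bool); train_mask[:60] = True
val_mask = torch.zeros(num_nodes, dtype=torch.bool);   val_mask[60:80] = True
test_mask = torch.zeros(num_nodes, dtype=torch.bool);  test_mask[80:] = True

data = Data(x=x, edge_index=edge_index, y=y,
            train_mask=train_mask, val_mask=val_mask, test_mask=test_mask)

# later, e.g.: loss = criterion(out[data.train_mask], data.y[data.train_mask])
```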
Does PyTorch change its internal seed during training?
I am trying to make my training code as deterministic and reproducible as possible. When running the same training code multiple times, and always re-initialising the model, I get different results - even if I set the seeds manually, before all runs start. I found that when I reset the seed on every&hellip;
1
2019-05-29T06:35:31.489Z
As <a class="mention" href="/u/ybj14">@ybj14</a> said, the pseudo-random number generator uses the seed as its initial seed and generates all sequential numbers based on this initial seed. That doesn’t mean that every “random” number will have the exact same value (which would create a useless random number generator), but that the sequ&hellip;
4
2019-05-29T10:51:15.418Z
https://discuss.pytorch.org/t/does-pytorch-change-its-internal-seed-during-training/46505/4
Could you try to save the state_dict instead of the model and optimizer directly? Then while restoring, try to use strict=False in .load_state_dict. So by saying “create a separate Data object for your training, testing and validation data sets.” Do you mean that I could convert my current data construction: data = Data(x = x, edge_index = edge_index, num_classes = 2, ) Into something like: train_&hellip; As <a class="mention" href="/u/ybj14">@ybj14</a> said, the pseudo-random number generator uses the seed as its initial seed and generates all sequential numbers based on this initial seed. That doesn’t mean that every “random” number will have the exact same value (which would create a useless random number generator), but that the sequ&hellip;
417
{'text': ['As <a class="mention" href="/u/ybj14">@ybj14</a> said, the pseudo-random number generator uses the seed as its initial seed and generates all sequential numbers based on this initial seed.\n\nThat doesn’t mean that every “random” number will have the exact same value (which would create a useless random number generator), but that the sequ&hellip;'], 'answer_start': [417]}
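A tiny demonstration of the point in the answer: the seed fixes the whole sequence of draws, not the value of every draw.

```python
import torch

torch.manual_seed(0)
a = torch.rand(2)   # first draw from the seeded generator
b = torch.rand(2)   # second draw: different values, same sequence on every run

torch.manual_seed(0)
a_again = torch.rand(2)
print(torch.equal(a, a_again))   # True: re-seeding restarts the sequence
print(torch.equal(a, b))         # False: consecutive draws still differ
```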
Cannot freeze batch normalization parameters
During training my model I am making some of the layers not trainable via: for param in model.parameters(): param.requires_grad = False However, after checking the parameters I see there are a lot of parameters that still train and change, such as: extras.0.conv.7.running_var extras.1.conv&hellip;
0
2019-03-01T23:31:26.711Z
I was dealing with that ***** the whole day, finally i think i got it, adding this will make BN not trainable: def set_bn_eval(m): classname = m.__class__.__name__ if classname.find(&#39;BatchNorm2d&#39;) != -1: m.eval() model.apply(set_bn_eval)
2
2019-03-02T00:09:10.040Z
https://discuss.pytorch.org/t/cannot-freeze-batch-normalization-parameters/38696/2
I was dealing with that ***** the whole day, finally i think i got it, adding this will make BN not trainable: def set_bn_eval(m): classname = m.__class__.__name__ if classname.find(&#39;BatchNorm2d&#39;) != -1: m.eval() model.apply(set_bn_eval) Hi, Current binaries for cuda 11.0 will work with these cards There are perf issues with these because the current cuda libraries are not properly optimized for them. We will release 1.7.1 soon to update to cudnn 8.0.5 to fix some of these. But it won’t fix everything I’m afraid and we’ll have to &hellip; Thanks alot the Custom data worked but i i had to pre-process the image manually using a loop to crop the required images in the range of 50 then i applied the transform and my custom dataset is as follows: num_classes = 2 Class that reads a sequence of image paths from a directory and creates a d&hellip;
1,528
{'text': ['I was dealing with that ***** the whole day, finally i think i got it, adding this will make BN not trainable:\n\ndef set_bn_eval(m):\n\nclassname = m.__class__.__name__\n\nif classname.find(&#39;BatchNorm2d&#39;) != -1:\n\nm.eval()\n\nmodel.apply(set_bn_eval)'], 'answer_start': [1528]}
Rtx 3070/3080 support
Hello! So I’ve got a machine with ubuntu 20.04 and rtx 3070. Is it possible to run pytorch at this time with support to the new GPUs? From my understanding for the rtx 3070 I need cudnn 8.0.5 and cuda 11.1, is there a way to get pytorch to work this these versions? What are my current options to&hellip;
0
2020-12-02T19:40:30.691Z
Hi, Current binaries for cuda 11.0 will work with these cards There are perf issues with these because the current cuda libraries are not properly optimized for them. We will release 1.7.1 soon to update to cudnn 8.0.5 to fix some of these. But it won’t fix everything I’m afraid and we’ll have to &hellip;
2
2020-12-02T19:45:55.726Z
https://discuss.pytorch.org/t/rtx-3070-3080-support/104895/2
I was dealing with that ***** the whole day, finally i think i got it, adding this will make BN not trainable: def set_bn_eval(m): classname = m.__class__.__name__ if classname.find(&#39;BatchNorm2d&#39;) != -1: m.eval() model.apply(set_bn_eval) Hi, Current binaries for cuda 11.0 will work with these cards There are perf issues with these because the current cuda libraries are not properly optimized for them. We will release 1.7.1 soon to update to cudnn 8.0.5 to fix some of these. But it won’t fix everything I’m afraid and we’ll have to &hellip; Thanks alot the Custom data worked but i i had to pre-process the image manually using a loop to crop the required images in the range of 50 then i applied the transform and my custom dataset is as follows: num_classes = 2 Class that reads a sequence of image paths from a directory and creates a d&hellip;
1,015
{'text': ['Hi,\n\nCurrent binaries for cuda 11.0 will work with these cards\n\nThere are perf issues with these because the current cuda libraries are not properly optimized for them. We will release 1.7.1 soon to update to cudnn 8.0.5 to fix some of these. But it won’t fix everything I’m afraid and we’ll have to &hellip;'], 'answer_start': [1015]}
*Please Help: Data Loader for Image and Mask*
Hi everyone, I am currently developing a meta-learning approach for semantic segmentation using MAML, and my dataset comprises an image and its mask in tif format. My file path is ./dataset &gt; Train, Test and Validate, and each has sub-folders image_folder and mask_folder. I am writing a custom&hellip;
1
2020-06-23T19:16:18.322Z
Thanks alot the Custom data worked but i i had to pre-process the image manually using a loop to crop the required images in the range of 50 then i applied the transform and my custom dataset is as follows: num_classes = 2 Class that reads a sequence of image paths from a directory and creates a d&hellip;
0
2020-07-10T05:45:13.828Z
https://discuss.pytorch.org/t/please-help-data-loader-for-image-and-mask/86602/12
I was dealing with that ***** the whole day, finally i think i got it, adding this will make BN not trainable: def set_bn_eval(m): classname = m.__class__.__name__ if classname.find(&#39;BatchNorm2d&#39;) != -1: m.eval() model.apply(set_bn_eval) Hi, Current binaries for cuda 11.0 will work with these cards There are perf issues with these because the current cuda libraries are not properly optimized for them. We will release 1.7.1 soon to update to cudnn 8.0.5 to fix some of these. But it won’t fix everything I’m afraid and we’ll have to &hellip; Thanks alot the Custom data worked but i i had to pre-process the image manually using a loop to crop the required images in the range of 50 then i applied the transform and my custom dataset is as follows: num_classes = 2 Class that reads a sequence of image paths from a directory and creates a d&hellip;
560
{'text': ['Thanks alot the Custom data worked but i i had to pre-process the image manually using a loop to crop the required images in the range of 50 then i applied the transform and my custom dataset is as follows:\n\nnum_classes = 2\n\nClass that reads a sequence of image paths from a directory and creates a d&hellip;'], 'answer_start': [560]}
How to rearrange this tensor?
Hello, I have been searching around for a while, but still cannot solve the problem I have at hand. Could somebody point me to the solution, or give a suggestion for this question? I have a tensor A = [[row1], [row2], [row3]] and a tensor B = [1, 2, 1]; tensor B indicates which group each row of A belong&hellip;
0
2019-08-21T09:36:54.850Z
Try a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]) b = torch.tensor([1,2,1]) _, inds = torch.sort(b) rearranged_a = a[inds]
0
2019-08-21T10:11:17.309Z
https://discuss.pytorch.org/t/how-to-rearrange-this-tensor/53918/4
Try a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]) b = torch.tensor([1,2,1]) _, inds = torch.sort(b) rearranged_a = a[inds] It’s just that your sampler should be a subclass of the original torch.utils.data.Sampler as the error states. You can fix that by setting class SequentialSampler2(torch.utils.data.Sampler): to inherit from it. and add super().__init__() in the first line of your init function to make sure to init&hellip; <a class="mention" href="/u/odats">@odats</a> can you set a seed each epoch for that ? @trainer.on(Events.EPOCH_STARTED) def set_epoch_seed(): set_seed(trainer.state.epoch) If this does not work for you, please provide a minimal code snippet to see the problem.
1,736
{'text': ['Try\n\na = torch.tensor([[1,2,3],[4,5,6],[7,8,9]])\n\nb = torch.tensor([1,2,1])\n\n_, inds = torch.sort(b)\n\nrearranged_a = a[inds]'], 'answer_start': [1736]}
Resume iterating dataloader from checkpoint batch_idx
Hi, I was wondering whether it is possible to resume iterating through a dataloader from a checkpoint. For example: dataloaders_dict = {phase: torch.utils.data.DataLoader(datasets_dict[phase], batch_size=args.batch_size, num_workers=args.num_workers, shuffle=False) for phase in [&#39;train&#39;]} # m&hellip;
2
2019-11-12T02:07:55.522Z
It’s just that your sampler should be a subclass of the original torch.utils.data.Sampler as the error states. You can fix that by setting class SequentialSampler2(torch.utils.data.Sampler): to inherit from it. and add super().__init__() in the first line of your init function to make sure to init&hellip;
2
2019-11-12T20:12:18.446Z
https://discuss.pytorch.org/t/resume-iterating-dataloader-from-checkpoint-batch-idx/60683/4
Try a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]) b = torch.tensor([1,2,1]) _, inds = torch.sort(b) rearranged_a = a[inds] It’s just that your sampler should be a subclass of the original torch.utils.data.Sampler as the error states. You can fix that by setting class SequentialSampler2(torch.utils.data.Sampler): to inherit from it. and add super().__init__() in the first line of your init function to make sure to init&hellip; <a class="mention" href="/u/odats">@odats</a> can you set a seed each epoch for that ? @trainer.on(Events.EPOCH_STARTED) def set_epoch_seed(): set_seed(trainer.state.epoch) If this does not work for you, please provide a minimal code snippet to see the problem.
993
{'text': ['It’s just that your sampler should be a subclass of the original torch.utils.data.Sampler as the error states.\n\nYou can fix that by setting class SequentialSampler2(torch.utils.data.Sampler): to inherit from it. and add super().__init__() in the first line of your init function to make sure to init&hellip;'], 'answer_start': [993]}
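The thread's SequentialSampler2 is not reproduced above, so the following is only an illustrative sketch of the fix the answer describes: a sampler that inherits from torch.utils.data.Sampler, calls super().__init__() first, and skips the first start_index samples. The class and argument names are made up:

import torch
from torch.utils.data import Sampler, DataLoader, TensorDataset

class ResumableSequentialSampler(Sampler):
    """Yields indices start_index, start_index+1, ..., len(data_source)-1."""
    def __init__(self, data_source, start_index=0):
        super().__init__(data_source)   # first line of __init__, as the answer suggests
        self.data_source = data_source
        self.start_index = start_index

    def __iter__(self):
        return iter(range(self.start_index, len(self.data_source)))

    def __len__(self):
        return len(self.data_source) - self.start_index

dataset = TensorDataset(torch.arange(10).float())
loader = DataLoader(dataset, batch_size=2,
                    sampler=ResumableSequentialSampler(dataset, start_index=4))
for batch in loader:
    print(batch)   # iteration resumes at sample index 4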
How to set the same random seed for all workers?
Without setting a random seed the data loader returns the same random data for each epoch: epoch 1: worker1-&gt;[2], worker2-&gt;[2], epoch 2: worker1-&gt;[2], worker2-&gt;[2],... When I set a random seed in worker_init_fn function I get random data for each worker: epoch 1: worker1-&gt;[2], worker2-&gt;[4], epoc&hellip;
1
2020-08-10T08:56:15.098Z
<a class="mention" href="/u/odats">@odats</a> can you set a seed each epoch for that ? @trainer.on(Events.EPOCH_STARTED) def set_epoch_seed(): set_seed(trainer.state.epoch) If this does not work for you, please provide a minimal code snippet to see the problem.
0
2020-08-10T11:40:17.794Z
https://discuss.pytorch.org/t/how-to-set-the-same-random-seed-for-all-workers/92253/2
Try a = torch.tensor([[1,2,3],[4,5,6],[7,8,9]]) b = torch.tensor([1,2,1]) _, inds = torch.sort(b) rearranged_a = a[inds] It’s just that your sampler should be a subclass of the original torch.utils.data.Sampler as the error states. You can fix that by setting class SequentialSampler2(torch.utils.data.Sampler): to inherit from it. and add super().__init__() in the first line of your init function to make sure to init&hellip; <a class="mention" href="/u/odats">@odats</a> can you set a seed each epoch for that ? @trainer.on(Events.EPOCH_STARTED) def set_epoch_seed(): set_seed(trainer.state.epoch) If this does not work for you, please provide a minimal code snippet to see the problem.
434
{'text': ['<a class="mention" href="/u/odats">@odats</a> can you set a seed each epoch for that ?\n\[email protected](Events.EPOCH_STARTED)\n\ndef set_epoch_seed():\n\nset_seed(trainer.state.epoch)\n\nIf this does not work for you, please provide a minimal code snippet to see the problem.'], 'answer_start': [434]}
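A hedged sketch of the same idea at the plain-DataLoader level (independent of ignite): derive each worker's seed from a base seed plus the epoch and the worker id, and pass a fresh worker_init_fn every epoch. The dataset and batch size are placeholders:

import random
from functools import partial
import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def seed_workers(worker_id, epoch):
    # different seed per worker and per epoch, but fully reproducible
    seed = 1234 + epoch * 100 + worker_id
    np.random.seed(seed)
    random.seed(seed)
    torch.manual_seed(seed)

if __name__ == '__main__':   # guard needed on platforms that spawn worker processes
    dataset = TensorDataset(torch.arange(16).float())
    for epoch in range(2):
        loader = DataLoader(dataset, batch_size=4, num_workers=2,
                            worker_init_fn=partial(seed_workers, epoch=epoch))
        for batch in loader:
            pass   # training / random augmentation happens here

Dropping worker_id from the seed gives every worker the same random stream, while keeping the epoch term still varies the randomness between epochs, which is what the per-epoch seeding in the answer achieves.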
One-hot encoding with autograd (Dice loss)
Hi, I want to implement a dice loss for multi-class segmentation; my solution requires encoding the target tensor with one-hot encoding because I am working on a multi-label problem. If you have a better solution than this, please feel free to share it. This loss function needs to be differentiab&hellip;
0
2017-11-10T17:08:06.709Z
Finally got something to work : def dice_loss(output, target, weights=None, ignore_index=None): &quot;&quot;&quot; output : NxCxHxW Variable target : NxHxW LongTensor weights : C FloatTensor ignore_index : int index to ignore from loss &quot;&quot;&quot; eps = 0.0001 output = output.exp() e&hellip;
1
2017-11-14T14:47:16.436Z
https://discuss.pytorch.org/t/one-hot-encoding-with-autograd-dice-loss/9781/5
Finally got something to work : def dice_loss(output, target, weights=None, ignore_index=None): &quot;&quot;&quot; output : NxCxHxW Variable target : NxHxW LongTensor weights : C FloatTensor ignore_index : int index to ignore from loss &quot;&quot;&quot; eps = 0.0001 output = output.exp() e&hellip; I quickly created a <a href="https://gist.github.com/justusschock/6f9c55e423db2f39e9ca93100a74b515" rel="nofollow noopener">gist</a>. However the code is not brandnew so i cannot guarantee it to work with the latest pytorch version. Maybe you have to do some minor changes to get it run. I am also unsure about the imports but I think I covered everything I used (and maybe a bit too much) From the stack trace it looks like the problem is with the outputs no? Maybe your forward returns Tensors that are not on the right device?
1,398
{'text': ['Finally got something to work :\n\ndef dice_loss(output, target, weights=None, ignore_index=None):\n\n&quot;&quot;&quot;\n\noutput : NxCxHxW Variable\n\ntarget : NxHxW LongTensor\n\nweights : C FloatTensor\n\nignore_index : int index to ignore from loss\n\n&quot;&quot;&quot;\n\neps = 0.0001\n\noutput = output.exp()\n\ne&hellip;'], 'answer_start': [1398]}
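The dice_loss from the answer is truncated above, so what follows is only a generic sketch of the one-hot idea it builds on, not the thread's exact implementation: scatter_ turns an NxHxW index target into an NxCxHxW one-hot tensor, and gradients flow through the softmax of the network output, not through the one-hot target:

import torch
import torch.nn.functional as F

def one_hot(target, num_classes):
    # target: NxHxW LongTensor of class indices -> NxCxHxW FloatTensor
    n, h, w = target.shape
    onehot = torch.zeros(n, num_classes, h, w, device=target.device)
    return onehot.scatter_(1, target.unsqueeze(1), 1.0)

def soft_dice_loss(logits, target, eps=1e-4):
    # logits: NxCxHxW raw network output, target: NxHxW class indices
    probs = F.softmax(logits, dim=1)
    target_1h = one_hot(target, logits.size(1))
    dims = (0, 2, 3)
    intersection = (probs * target_1h).sum(dims)
    cardinality = probs.sum(dims) + target_1h.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice.mean()

logits = torch.randn(2, 4, 8, 8, requires_grad=True)
target = torch.randint(0, 4, (2, 8, 8))
loss = soft_dice_loss(logits, target)
loss.backward()   # gradients flow through probs, not the one-hot target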
ImageFolder data shuffle?
Hello everybody, Does any function exist in pytorch that shuffles data before DataLoader()? I spent some time searching and still cannot properly shuffle my output from ImageFolder() :face_with_raised_eyebrow: Thanks, Anton
1
2018-05-08T11:25:16.809Z
I quickly created a <a href="https://gist.github.com/justusschock/6f9c55e423db2f39e9ca93100a74b515" rel="nofollow noopener">gist</a>. However the code is not brandnew so i cannot guarantee it to work with the latest pytorch version. Maybe you have to do some minor changes to get it run. I am also unsure about the imports but I think I covered everything I used (and maybe a bit too much)
1
2018-05-08T13:06:21.620Z
https://discuss.pytorch.org/t/imagefolder-data-shuffle/17731/10
Finally got something to work : def dice_loss(output, target, weights=None, ignore_index=None): &quot;&quot;&quot; output : NxCxHxW Variable target : NxHxW LongTensor weights : C FloatTensor ignore_index : int index to ignore from loss &quot;&quot;&quot; eps = 0.0001 output = output.exp() e&hellip; I quickly created a <a href="https://gist.github.com/justusschock/6f9c55e423db2f39e9ca93100a74b515" rel="nofollow noopener">gist</a>. However the code is not brandnew so i cannot guarantee it to work with the latest pytorch version. Maybe you have to do some minor changes to get it run. I am also unsure about the imports but I think I covered everything I used (and maybe a bit too much) From the stack trace it looks like the problem is with the outputs no? Maybe your forward returns Tensors that are not on the right device?
1,010
{'text': ['I quickly created a <a href="https://gist.github.com/justusschock/6f9c55e423db2f39e9ca93100a74b515" rel="nofollow noopener">gist</a>. However the code is not brandnew so i cannot guarantee it to work with the latest pytorch version. Maybe you have to do some minor changes to get it run.\n\nI am also unsure about the imports but I think I covered everything I used (and maybe a bit too much)'], 'answer_start': [1010]}
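The gist linked in the answer is not reproduced here; as an alternative sketch under my own assumptions, a fixed random permutation wrapped in torch.utils.data.Subset shuffles an ImageFolder once, before any DataLoader is involved (the './data' path is a placeholder):

import torch
from torchvision import datasets, transforms
from torch.utils.data import Subset, DataLoader

dataset = datasets.ImageFolder('./data', transform=transforms.ToTensor())

# one fixed shuffle of the whole dataset, done up front
perm = torch.randperm(len(dataset)).tolist()
shuffled = Subset(dataset, perm)

# the loader can then stay sequential and still see shuffled data
loader = DataLoader(shuffled, batch_size=32, shuffle=False)

For re-shuffling every epoch, passing shuffle=True to the DataLoader itself is the usual approach.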
nn.DataParallel: TypeError: expected sequence object with len >= 0 or a single integer
In my forward function: def __call__(self, train=True): if train: predicted = self.forward(...) loss = .... return loss # return a single value that&#39;s fine # loss.size() = the number of my GPUs. else: predicted = self.forward(...) return predi&hellip;
0
2020-09-22T06:01:35.001Z
From the stack trace it looks like the problem is with the outputs no? Maybe your forward returns Tensors that are not on the right device?
0
2020-09-25T15:56:49.242Z
https://discuss.pytorch.org/t/nn-dataparallel-typeerror-expected-sequence-object-with-len-0-or-a-single-integer/97082/23
Finally got something to work : def dice_loss(output, target, weights=None, ignore_index=None): &quot;&quot;&quot; output : NxCxHxW Variable target : NxHxW LongTensor weights : C FloatTensor ignore_index : int index to ignore from loss &quot;&quot;&quot; eps = 0.0001 output = output.exp() e&hellip; I quickly created a <a href="https://gist.github.com/justusschock/6f9c55e423db2f39e9ca93100a74b515" rel="nofollow noopener">gist</a>. However the code is not brandnew so i cannot guarantee it to work with the latest pytorch version. Maybe you have to do some minor changes to get it run. I am also unsure about the imports but I think I covered everything I used (and maybe a bit too much) From the stack trace it looks like the problem is with the outputs no? Maybe your forward returns Tensors that are not on the right device?
702
{'text': ['From the stack trace it looks like the problem is with the outputs no?\n\nMaybe your forward returns Tensors that are not on the right device?'], 'answer_start': [702]}
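A sketch of the pattern the answer hints at, with a toy module invented for illustration: compute the loss inside forward so each replica returns a tensor that lives on its own GPU, and keep a batch-like dimension so nn.DataParallel can gather the per-GPU losses:

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, x, target):
        out = self.fc(x)
        # the loss is created from tensors on this replica's device,
        # never on a hard-coded device such as torch.device('cuda:0')
        loss = nn.functional.cross_entropy(out, target)
        return loss.unsqueeze(0)   # shape (1,), so the gather step can concatenate losses

if torch.cuda.is_available():
    model = nn.DataParallel(ToyModel().cuda())
    x = torch.randn(16, 8).cuda()
    target = torch.randint(0, 2, (16,)).cuda()
    loss = model(x, target).mean()   # average the per-GPU losses before backward
    loss.backward()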
Tensor.to() do NOT retain requires_grad info?
Hi, I found a weird bug: In [1]: import torch In [2]: a=torch.tensor([2], requires_grad=True) In [3]: b=a.to(&#39;cuda&#39;) In [4]: a.requires_grad Out[4]: True In [5]: b.requires_grad Out[5]: False Why does b NOT keep the requires_grad info from a? Besides, torch.to() is seemingly not an in-place operation&hellip;
1
2018-05-02T10:50:06.469Z
Oh, I see the issue. The tensor you created is not floating point, if you create a floating point tensor torch.tensor([2.], requires_grad=True) it works as expected. We recently merged some code that makes non-floating-point tensor calculations not require grad (I don’t know if that changed this s&hellip;
2
2018-05-02T15:39:21.050Z
https://discuss.pytorch.org/t/tensor-to-do-not-retain-requires-grad-info/17353/4
Oh, I see the issue. The tensor you created is not floating point, if you create a floating point tensor torch.tensor([2.], requires_grad=True) it works as expected. We recently merged some code that makes non-floating-point tensor calculations not require grad (I don’t know if that changed this s&hellip; Soo the point is that original vgg was using that size. Network is “used to see” objects whose sizes are contained in a 112x112 image. There is something called receptive field (rather than boring you with a shitty explanation I will link to a blog <a href="https://towardsdatascience.com/understand-local-receptive-fields-in-convolutional-neural-networks-f26d700be16c" rel="nofollow noopener">https://towardsdatascience.com/understand-local-re&hellip;</a> Like I said it is due to a stupid programming mistake. Since I am assigning the weights to the variables before and after I am accessing the same object using a reference. So both weight matrices are exactly the same. So logically if I subtract them from another the result will always be zero. What &hellip;
1,684
{'text': ['Oh, I see the issue. The tensor you created is not floating point, if you create a floating point tensor torch.tensor([2.], requires_grad=True) it works as expected. We recently merged some code that makes non-floating-point tensor calculations not require grad (I don’t know if that changed this s&hellip;'], 'answer_start': [1684]}
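A small illustration of the answer's point, runnable on CPU or GPU (the dtype conversion stands in for the .to('cuda') move when no GPU is available):

import torch

# integer tensors cannot usefully require gradients; newer PyTorch versions
# even reject torch.tensor([2], requires_grad=True) with a RuntimeError
a = torch.tensor([2.], requires_grad=True)   # floating point works as expected
b = a.to('cuda') if torch.cuda.is_available() else a.to(torch.float64)

print(a.requires_grad)   # True
print(b.requires_grad)   # True  - .to() keeps requires_grad for floating-point tensors
print(b.is_leaf)         # False - b is the output of an autograd op, not a leaf

As for the second question in the post: .to() always returns a new tensor (or the original one when nothing changes); it never modifies a in place.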
RuntimeError: DataLoader worker (pid 27351) is killed by signal: Killed
I’m running the data loader below, which applies a filter to a microscopy image prior to training in order to count the red and green cells. This code filters the red cells. Since I applied this to the code I keep getting the error message above. I have tried increasing the memory allocation to th&hellip;
0
2020-08-03T09:30:15.568Z
Soo the point is that original vgg was using that size. Network is “used to see” objects whose sizes are contained in a 112x112 image. There is something called receptive field (rather than boring you with a shitty explanation I will link to a blog <a href="https://towardsdatascience.com/understand-local-receptive-fields-in-convolutional-neural-networks-f26d700be16c" rel="nofollow noopener">https://towardsdatascience.com/understand-local-re&hellip;</a>
1
2020-08-04T09:37:39.368Z
https://discuss.pytorch.org/t/runtimeerror-dataloader-worker-pid-27351-is-killed-by-signal-killed/91457/15
Oh, I see the issue. The tensor you created is not floating point, if you create a floating point tensor torch.tensor([2.], requires_grad=True) it works as expected. We recently merged some code that makes non-floating-point tensor calculations not require grad (I don’t know if that changed this s&hellip; Soo the point is that original vgg was using that size. Network is “used to see” objects whose sizes are contained in a 112x112 image. There is something called receptive field (rather than boring you with a shitty explanation I will link to a blog <a href="https://towardsdatascience.com/understand-local-receptive-fields-in-convolutional-neural-networks-f26d700be16c" rel="nofollow noopener">https://towardsdatascience.com/understand-local-re&hellip;</a> Like I said it is due to a stupid programming mistake. Since I am assigning the weights to the variables before and after I am accessing the same object using a reference. So both weight matrices are exactly the same. So logically if I subtract them from another the result will always be zero. What &hellip;
1,151
{'text': ['Soo the point is that original vgg was using that size. Network is “used to see” objects whose sizes are contained in a 112x112 image.\n\nThere is something called receptive field (rather than boring you with a shitty explanation I will link to a blog <a href="https://towardsdatascience.com/understand-local-receptive-fields-in-convolutional-neural-networks-f26d700be16c" rel="nofollow noopener">https://towardsdatascience.com/understand-local-re&hellip;</a>'], 'answer_start': [1151]}
Embeddings not getting updated
# Create a new model to update the embeddings according to the requirement class Modeler(nn.Module): def __init__(self, embed, vocab_size, embed_dim, keyword): super(Modeler, self).__init__() self.embeddings = nn.Embedding(vocab_size, embed_dim) self.embeddi&hellip;
0
2017-06-07T08:35:05.657Z
Like I said it is due to a stupid programming mistake. Since I am assigning the weights to the variables before and after I am accessing the same object using a reference. So both weight matrices are exactly the same. So logically if I subtract them from another the result will always be zero. What &hellip;
1
2017-12-13T16:05:54.747Z
https://discuss.pytorch.org/t/embeddings-not-getting-updated/3796/10
Oh, I see the issue. The tensor you created is not floating point, if you create a floating point tensor torch.tensor([2.], requires_grad=True) it works as expected. We recently merged some code that makes non-floating-point tensor calculations not require grad (I don’t know if that changed this s&hellip; Soo the point is that original vgg was using that size. Network is “used to see” objects whose sizes are contained in a 112x112 image. There is something called receptive field (rather than boring you with a shitty explanation I will link to a blog <a href="https://towardsdatascience.com/understand-local-receptive-fields-in-convolutional-neural-networks-f26d700be16c" rel="nofollow noopener">https://towardsdatascience.com/understand-local-re&hellip;</a> Like I said it is due to a stupid programming mistake. Since I am assigning the weights to the variables before and after I am accessing the same object using a reference. So both weight matrices are exactly the same. So logically if I subtract them from another the result will always be zero. What &hellip;
767
{'text': ['Like I said it is due to a stupid programming mistake. Since I am assigning the weights to the variables before and after I am accessing the same object using a reference. So both weight matrices are exactly the same. So logically if I subtract them from another the result will always be zero. What &hellip;'], 'answer_start': [767]}
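A sketch of the aliasing mistake the answer describes: holding a plain reference to the weight tensor and comparing it with itself always yields zero, while .clone() takes a real snapshot. The tiny embedding and single SGD step are placeholders:

import torch
import torch.nn as nn

emb = nn.Embedding(10, 4)
opt = torch.optim.SGD(emb.parameters(), lr=0.1)

alias    = emb.weight          # same object as emb.weight -> always looks "unchanged"
snapshot = emb.weight.clone()  # actual copy of the values before the update

idx = torch.tensor([1, 3])
loss = emb(idx).sum()
loss.backward()
opt.step()

print((emb.weight - alias).abs().sum().item())     # 0.0  - misleading, same storage
print((emb.weight - snapshot).abs().sum().item())  # > 0  - the embedding did update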
How to concat two sequential()?
Suppose I define two Sequential() modules: a = nn.Sequential( ... ) b = nn.Sequential( ... ) Now I want to define another Sequential() that can concatenate the two Sequential() modules a and b above. What am I supposed to do?
1
2019-07-15T06:34:32.829Z
Oh, you want the tensors concatenated? I think just running the two and then concatenating the result is the best option. Best regards Thomas
1
2019-07-15T07:23:41.921Z
https://discuss.pytorch.org/t/how-to-concat-two-sequential/50621/6
Oh, you want the tensors concatenated? I think just running the two and then concatenating the result is the best option. Best regards Thomas Hi, For vgg-16 available in torchvision.models when you call list(vgg16_model.children())[:-1] it will remove whole nn.Sequential defined as following: Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): L&hellip; You could just add the parameters lists: optimizer = optim.SGD(list(modelA.parameters()) + list(modelB.parameters()), lr=1e-3) How are you transferring the parameters from layer A2 to B1? If so, the weight matrix will have a size mismatch ([30, 8] vs. [40, 8]).
2,150
{'text': ['Oh, you want the tensors concatenated?\n\nI think just running the two and then concatenating the result is the best option.\n\nBest regards\n\nThomas'], 'answer_start': [2150]}
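A sketch of the answer's suggestion, wrapping the two branches in one module and concatenating their outputs; the layer sizes are made up:

import torch
import torch.nn as nn

a = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
b = nn.Sequential(nn.Linear(16, 4), nn.ReLU())

class ConcatBranches(nn.Module):
    def __init__(self, branch_a, branch_b):
        super().__init__()
        self.branch_a = branch_a
        self.branch_b = branch_b

    def forward(self, x):
        # run both branches on the same input, then join along the feature dimension
        return torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)

model = ConcatBranches(a, b)
out = model(torch.randn(2, 16))
print(out.shape)   # torch.Size([2, 12])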
Using pretrained VGG-16 to get a feature vector from an image
Hi, I want to get a feature vector out of an image by passing the image through a pre-trained VGG-16. I used the pretrained Resnet50 to get a feature vector and that worked perfectly. But when I use the same method to get a feature vector from the VGG-16 network, I don’t get the 4096-d vector which&hellip;
0
2020-04-13T04:51:46.966Z
Hi, For vgg-16 available in torchvision.models when you call list(vgg16_model.children())[:-1] it will remove whole nn.Sequential defined as following: Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): L&hellip;
2
2020-04-13T09:38:24.949Z
https://discuss.pytorch.org/t/using-pretrained-vgg-16-to-get-a-feature-vector-from-an-image/76496/2
Oh, you want the tensors concatenated? I think just running the two and then concatenating the result is the best option. Best regards Thomas Hi, For vgg-16 available in torchvision.models when you call list(vgg16_model.children())[:-1] it will remove whole nn.Sequential defined as following: Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): L&hellip; You could just add the parameters lists: optimizer = optim.SGD(list(modelA.parameters()) + list(modelB.parameters()), lr=1e-3) How are you transferring the parameters from layer A2 to B1? If so, the weight matrix will have a size mismatch ([30, 8] vs. [40, 8]).
1,220
{'text': ['Hi,\n\nFor vgg-16 available in torchvision.models when you call list(vgg16_model.children())[:-1] it will remove whole nn.Sequential defined as following:\n\nSequential(\n\n(0): Linear(in_features=25088, out_features=4096, bias=True)\n\n(1): ReLU(inplace=True)\n\n(2): Dropout(p=0.5, inplace=False)\n\n(3): L&hellip;'], 'answer_start': [1220]}
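A sketch of the fix the answer implies: instead of cutting off the whole classifier with children()[:-1], drop only its final Linear layer so the model outputs the 4096-d vector; loading pretrained weights is left out here:

import torch
import torch.nn as nn
from torchvision import models

vgg16 = models.vgg16()          # add pretrained weights as needed
# keep features + avgpool + all classifier layers except the last 4096 -> 1000 Linear
vgg16.classifier = nn.Sequential(*list(vgg16.classifier.children())[:-1])
vgg16.eval()

with torch.no_grad():
    feat = vgg16(torch.randn(1, 3, 224, 224))
print(feat.shape)               # torch.Size([1, 4096])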
Merging two models
I want to implement a model similar to the one described in the picture below taken from <a href="https://datascience.stackexchange.com/questions/26103/merging-two-different-models-in-keras" rel="nofollow noopener">https://datascience.stackexchange.com/questions/26103/merging-two-different-models-in-keras</a> [mergedmodels.jpg: https://discuss.pytorch.org/uploads/default/original/2X/e/ee94c295235fb52a2cdcb10c526406f93ea3791b.jpeg] I have implementations of ModelA and ModelB that work fine when I train them separately. I am thinkin&hellip;
0
2019-05-19T17:08:26.119Z
You could just add the parameters lists: optimizer = optim.SGD(list(modelA.parameters()) + list(modelB.parameters()), lr=1e-3) How are you transferring the parameters from layer A2 to B1? If so, the weight matrix will have a size mismatch ([30, 8] vs. [40, 8]).
5
2019-05-19T17:45:56.579Z
https://discuss.pytorch.org/t/merging-two-models/45637/2
Oh, you want the tensors concatenated? I think just running the two and then concatenating the result is the best option. Best regards Thomas Hi, For vgg-16 available in torchvision.models when you call list(vgg16_model.children())[:-1] it will remove whole nn.Sequential defined as following: Sequential( (0): Linear(in_features=25088, out_features=4096, bias=True) (1): ReLU(inplace=True) (2): Dropout(p=0.5, inplace=False) (3): L&hellip; You could just add the parameters lists: optimizer = optim.SGD(list(modelA.parameters()) + list(modelB.parameters()), lr=1e-3) How are you transferring the parameters from layer A2 to B1? If so, the weight matrix will have a size mismatch ([30, 8] vs. [40, 8]).
450
{'text': ['You could just add the parameters lists:\n\noptimizer = optim.SGD(list(modelA.parameters()) + list(modelB.parameters()), lr=1e-3)\n\nHow are you transferring the parameters from layer A2 to B1? If so, the weight matrix will have a size mismatch ([30, 8] vs. [40, 8]).'], 'answer_start': [450]}
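A sketch expanding the answer's one-liner: two placeholder models feeding a shared head, trained with a single optimizer built from both parameter lists (the sizes and the head are invented for illustration):

import torch
import torch.nn as nn
import torch.optim as optim

modelA = nn.Sequential(nn.Linear(10, 8), nn.ReLU())
modelB = nn.Sequential(nn.Linear(20, 8), nn.ReLU())
head   = nn.Linear(16, 1)

# one optimizer updating everything that should learn jointly
optimizer = optim.SGD(list(modelA.parameters())
                      + list(modelB.parameters())
                      + list(head.parameters()), lr=1e-3)

xa, xb = torch.randn(4, 10), torch.randn(4, 20)
target = torch.randn(4, 1)

pred = head(torch.cat([modelA(xa), modelB(xb)], dim=1))
loss = nn.functional.mse_loss(pred, target)

optimizer.zero_grad()
loss.backward()
optimizer.step()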
Pytorch trained model on Webcam
Is there any way to do real-time image classification with a webcam using a Pytorch-trained model?
0
2018-08-26T18:51:48.836Z
I’ve checked another possibility and this is most likely the issue. In your normalization, you have an additional zero for the third channel, which results in the inf values: transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0,0.225]) Remove the zero and try it again.
1
2018-08-31T11:48:04.667Z
https://discuss.pytorch.org/t/pytorch-trained-model-on-webcam/23928/11
I’ve checked another possibility and this is most likely the issue. In your normalization, you have an additional zero for the third channel, which results in the inf values: transforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0,0.225]) Remove the zero and try it again. Looks good to me. To search for the problematic part, could you repeat this procedure with random tensors as input, i.e. don’t use your Dataset and DataLoader? Since you are seeding, the random tensor should be the same in each run. Ah, sorry, softmax dim option cannot take tuple but only int, so you have to flatten your image before computing softmax, something like this should do the trick: decoder_shape = decoder.shape flatten_decoder = decoder.view(decoder_shape[0], decoder_shape[1], -1) faltten_heat_map = torch.nn.Softmax&hellip;
1,426
{'text': ['I’ve checked another possibility and this is most likely the issue.\n\nIn your normalization, you have an additional zero for the third channel, which results in the inf values:\n\ntransforms.Normalize([0.485,0.456,0.406],[0.229,0.224,0,0.225])\n\nRemove the zero and try it again.'], 'answer_start': [1426]}
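A sketch of the corrected preprocessing from the answer (three means and three stds, one per channel); the frame is a placeholder, since the original webcam capture loop is not shown in the thread:

import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # exactly three std values - the stray extra 0 in the thread caused the inf outputs
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

frame = Image.new('RGB', (640, 480))    # placeholder for a frame grabbed from the webcam
batch = preprocess(frame).unsqueeze(0)  # 1x3x224x224, ready to be passed to the model
print(batch.shape)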