Latest commit: Upload GPTJForCausalLM (4d6cdb2)

- 1.48 kB · initial commit
- 977 Bytes · Upload GPTJForCausalLM
model.pt · 7.31 GB · Upload with huggingface_hub
Detected Pickle imports (27):
- "transformers.models.gptj.configuration_gptj.GPTJConfig",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.linear.Linear",
- "torch.nn.modules.sparse.Embedding",
- "torch.qint8",
- "torch._utils._rebuild_tensor_v2",
- "torch.nn.modules.normalization.LayerNorm",
- "torch.DoubleStorage",
- "__builtin__.set",
- "torch.LongStorage",
- "torch.BoolStorage",
- "transformers.models.gptj.modeling_gptj.GPTJBlock",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.gptj.modeling_gptj.GPTJMLP",
- "torch.per_channel_affine",
- "torch.nn.quantized.modules.linear.LinearPackedParams",
- "transformers.activations.NewGELUActivation",
- "transformers.models.gptj.modeling_gptj.GPTJModel",
- "transformers.models.gptj.modeling_gptj.GPTJForCausalLM",
- "torch.QInt8Storage",
- "transformers.models.gptj.modeling_gptj.GPTJAttention",
- "torch.FloatStorage",
- "torch._utils._rebuild_parameter",
- "collections.OrderedDict",
- "torch.nn.quantized.dynamic.modules.linear.Linear",
- "torch.ScriptObject",
- "torch._utils._rebuild_qtensor"
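The import list above indicates that model.pt is a full pickled GPTJForCausalLM whose nn.Linear layers were swapped for dynamically quantized qint8 modules (torch.nn.quantized.dynamic.modules.linear.Linear, torch.per_channel_affine). A minimal loading sketch, assuming the file is trusted and that compatible torch and transformers versions are installed so the pickled class references resolve; the tokenizer id below is an assumption, not something this listing specifies:

```python
import torch
from transformers import AutoTokenizer

# model.pt is a pickle of the whole model object, not just weights, so unpickling it
# resolves the transformers/torch classes listed above. Only load files you trust.
# Newer PyTorch versions default torch.load to weights_only=True, which rejects
# full-object pickles, hence weights_only=False here.
model = torch.load("model.pt", map_location="cpu", weights_only=False)
model.eval()

# Hypothetical usage with the upstream GPT-J tokenizer (an assumption; prefer a
# tokenizer shipped alongside this repo if one exists).
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
inputs = tokenizer("Hello, my name is", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```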
- 954 MB · Upload GPTJForCausalLM
- 944 MB · Upload GPTJForCausalLM
- 977 MB · Upload GPTJForCausalLM
- 910 MB · Upload GPTJForCausalLM
- 944 MB · Upload GPTJForCausalLM
- 977 MB · Upload GPTJForCausalLM
- 910 MB · Upload GPTJForCausalLM
- 944 MB · Upload GPTJForCausalLM
- 977 MB · Upload GPTJForCausalLM
- 910 MB · Upload GPTJForCausalLM
- 944 MB · Upload GPTJForCausalLM
- 977 MB · Upload GPTJForCausalLM
- 782 MB · Upload GPTJForCausalLM
- 25.8 kB · Upload GPTJForCausalLM
state-dict.pt · 7.31 GB · Upload with huggingface_hub
Detected Pickle imports (11):
- "torch.per_channel_affine",
- "torch.BoolStorage",
- "collections.OrderedDict",
- "torch.LongStorage",
- "torch.QInt8Storage",
- "torch.FloatStorage",
- "torch._utils._rebuild_tensor_v2",
- "torch.DoubleStorage",
- "torch._utils._rebuild_qtensor",
- "torch._utils._rebuild_parameter",
- "torch.qint8"
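state-dict.pt, by contrast, appears to hold only an OrderedDict of tensors, including quantized qint8 weight tensors, so a matching module tree has to be rebuilt and re-quantized before the weights can be loaded. A minimal sketch, assuming the checkpoint came from torch.ao.quantization.quantize_dynamic applied to the model's nn.Linear layers (which the detected imports suggest); the config path is a placeholder for this repo's config.json:

```python
import torch
from transformers import GPTJConfig, GPTJForCausalLM

# Rebuild the float GPT-J skeleton from the repo's config (path is an assumption).
# Note: this instantiates the full fp32 model first, which needs substantial RAM.
config = GPTJConfig.from_pretrained("./config.json")
model = GPTJForCausalLM(config)

# Re-apply dynamic qint8 quantization so the module tree matches the saved state dict
# (torch.QInt8Storage, torch.per_channel_affine, torch._utils._rebuild_qtensor above).
model = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The checkpoint contains quantized tensors and Parameter objects; load it only if
# you trust the source, since weights_only=False permits arbitrary unpickling.
state_dict = torch.load("state-dict.pt", map_location="cpu", weights_only=False)
model.load_state_dict(state_dict)
model.eval()
```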