Latest commit: Upload tokenizer (de8638f)

1.48 kB · initial commit
125 Bytes · Update README.md
735 Bytes · Upload RobertaForSequenceClassification
456 kB · Upload tokenizer
499 MB · Upload RobertaForSequenceClassification
saved_model_20230312165122.pt
Detected Pickle imports (25)
- "__builtin__.set",
- "torch.LongStorage",
- "transformers.models.roberta.modeling_roberta.RobertaModel",
- "collections.OrderedDict",
- "transformers.models.roberta.modeling_roberta.RobertaSelfAttention",
- "transformers.models.roberta.modeling_roberta.RobertaIntermediate",
- "torch.FloatStorage",
- "torch._utils._rebuild_parameter",
- "transformers.activations.GELUActivation",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.roberta.modeling_roberta.RobertaAttention",
- "torch._C._nn.gelu",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.roberta.configuration_roberta.RobertaConfig",
- "transformers.models.roberta.modeling_roberta.RobertaClassificationHead",
- "torch.nn.modules.normalization.LayerNorm",
- "transformers.models.roberta.modeling_roberta.RobertaEncoder",
- "torch.nn.modules.dropout.Dropout",
- "transformers.models.roberta.modeling_roberta.RobertaOutput",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.roberta.modeling_roberta.RobertaSelfOutput",
- "transformers.models.roberta.modeling_roberta.RobertaEmbeddings",
- "transformers.models.roberta.modeling_roberta.RobertaForSequenceClassification",
- "transformers.models.roberta.modeling_roberta.RobertaLayer",
- "torch.nn.modules.linear.Linear"
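The "Detected Pickle imports" list above is the output of a static scan of the checkpoint's pickle stream: the `module.name` pairs that `GLOBAL` / `STACK_GLOBAL` opcodes would import when the file is loaded. A minimal stdlib sketch of such a scanner (a heuristic illustration, not Hugging Face's actual scanner) can be built on `pickletools`, with no torch required:

```python
import collections
import pickle
import pickletools

def detect_pickle_imports(data: bytes) -> set[str]:
    """Statically list the module.name pairs a pickle would import,
    without ever executing the pickle itself."""
    imports = set()
    strings = []  # recently pushed string constants (feed STACK_GLOBAL)
    for op, arg, _pos in pickletools.genops(data):
        if op.name == "GLOBAL":
            # Protocol <= 3: the argument is "module name" in one string.
            module, name = arg.split(" ", 1)
            imports.add(f"{module}.{name}")
        elif op.name == "STACK_GLOBAL":
            # Protocol >= 4: module and name were pushed as the two most
            # recent string constants (a heuristic that matches the
            # standard pickler's output).
            if len(strings) >= 2:
                imports.add(f"{strings[-2]}.{strings[-1]}")
        elif isinstance(arg, str):
            strings.append(arg)
    return imports

# An OrderedDict pickle references collections.OrderedDict:
payload = pickle.dumps(collections.OrderedDict(a=1))
print(detect_pickle_imports(payload))  # {'collections.OrderedDict'}
```

A `.pt` file written by `torch.save` is a zip archive whose `data.pkl` member holds such a pickle stream; actually loading an untrusted checkpoint executes code for each of these imports, which is why `torch.load(..., weights_only=True)` or a non-pickle format such as safetensors is the usual recommendation for untrusted sources.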
499 MB · Upload saved_model_20230312165122.pt
280 Bytes · Upload tokenizer
2.11 MB · Upload tokenizer
380 Bytes · Upload tokenizer
798 kB · Upload tokenizer