Latest commit: Update README.md (2f963e2, verified)

| File | Size | Last commit message |
| --- | --- | --- |
| - | 1.52 kB | initial commit |
| - | 352 Bytes | Update README.md |
| - | 16.8 GB | Uploading VAE in neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b |
| encoder.pkl | 446 MB | Uploading VAE in neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b |
| - | 55 Bytes | Uploading VAE in neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b |
| - | 17.1 GB | Uploading VAE in neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b |
| - | 158 Bytes | Uploading VAE in neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b |
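Individual files can be fetched directly from the Hub. Below is a minimal sketch using huggingface_hub's hf_hub_download, with the repo id taken from the commit messages above; encoder.pkl is the only file name shown in the listing, and the snippet assumes the repository is publicly readable (or that an access token is configured).

```python
# Minimal sketch: download encoder.pkl from the repository listed above.
# Assumes the huggingface_hub client is installed and the default revision is wanted.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="neuro-symbolic-ai/eb-langcvae-flan_t5-llama3_8b",
    filename="encoder.pkl",
)
print(local_path)  # cached local path to the downloaded file
```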
Detected Pickle imports in encoder.pkl (24):
- "transformers.activations.NewGELUActivation",
- "torch.storage._load_from_bytes",
- "transformers.models.t5.modeling_t5.T5LayerNorm",
- "torch.nn.modules.sparse.Embedding",
- "transformers.models.t5.modeling_t5.T5Attention",
- "tokenizers.models.Model",
- "torch._utils._rebuild_parameter",
- "transformers.tokenization_utils_fast.PreTrainedTokenizerFast",
- "transformers.models.t5.modeling_t5.T5EncoderModel",
- "langvae.encoders.sentence_annotated.AnnotatedSentenceEncoder",
- "transformers.models.t5.modeling_t5.T5DenseGatedActDense",
- "torch._utils._rebuild_tensor_v2",
- "transformers.models.t5.modeling_t5.T5LayerSelfAttention",
- "torch.nn.modules.container.ModuleList",
- "transformers.models.t5.modeling_t5.T5Block",
- "tokenizers.Tokenizer",
- "collections.OrderedDict",
- "transformers.models.t5.configuration_t5.T5Config",
- "transformers.models.t5.modeling_t5.T5LayerFF",
- "transformers.models.t5.modeling_t5.T5Stack",
- "torch.nn.modules.dropout.Dropout",
- "torch.nn.modules.linear.Linear",
- "tokenizers.AddedToken",
- "transformers.models.t5.tokenization_t5_fast.T5TokenizerFast"