# HamidRezaAttar/gpt2-product-description-generator
Tags: Text Generation · Transformers · PyTorch · English · gpt2 · text-generation-inference · Inference Endpoints
arXiv: 1706.03762
License: apache-2.0
## Files and versions

Branch: main · 1 contributor · History: 12 commits

⚠️ This model has 1 file scanned as unsafe.

Latest commit: ce23aa7 (verified) · HamidRezaAttar · "Update README.md" · 7 months ago
| File | Scan | Size | Last commit | When |
|---|---|---|---|---|
| .gitattributes | Safe | 1.26 kB | First model version | almost 3 years ago |
| .gitignore | Safe | 13 Bytes | UPDATE .gitignore | almost 3 years ago |
| README.md | Safe | 2.53 kB | Update README.md | 7 months ago |
| config.json | Safe | 907 Bytes | First model version | almost 3 years ago |
| dls_eng_Batch4.pkl | Unsafe (pickle) | 138 MB (LFS) | First model version | almost 3 years ago |
| history_epoch1.csv | Safe | 382 Bytes (LFS) | First model version | almost 3 years ago |
| pytorch_model.bin | Safe (pickle) | 510 MB (LFS) | First model version | almost 3 years ago |
| tokenizer.json | Safe | 1.36 MB | add tokenizer.json | almost 3 years ago |

### Pickle scan details

**dls_eng_Batch4.pkl** — Unsafe. Detected pickle imports (28): `fastcore.transform.Pipeline`, `tokenizers.Tokenizer`, `torch.device`, `__builtin__.getattr`, `pathlib.PosixPath`, `numpy.dtype`, `fastai.text.data.LMTensorText`, `__builtin__.unicode`, `fastai.data.core.DataLoaders`, `fastai.text.data._maybe_first`, `fastai.text.data.LMDataLoader`, `random.Random`, `fastcore.imports.noop`, `fastai.data.core.TfmdLists`, `_codecs.encode`, `numpy.core.multiarray._reconstruct`, `__main__.TransformersTokenizer`, `__builtin__.tuple`, `fastai.torch_core.Chunks`, `tokenizers.models.Model`, `transformers.models.gpt2.tokenization_gpt2_fast.GPT2TokenizerFast`, `torch.Tensor`, `fastcore.foundation.L`, `fastcore.xtras.ReindexCollection`, `fastai.data.load._wif`, `numpy.core.multiarray.scalar`, `fastai.data.load._FakeLoader`, `numpy.ndarray`

**pytorch_model.bin** — Safe. Detected pickle imports (4): `collections.OrderedDict`, `torch.ByteStorage`, `torch._utils._rebuild_tensor_v2`, `torch.FloatStorage`
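The "Detected Pickle imports" lists above come from a static scan of the pickle byte stream: a scanner reads the `GLOBAL`/`STACK_GLOBAL` opcodes to see which module attributes a file would import on load, without ever unpickling (and thus executing) anything. As a rough sketch of that idea using only the standard library's `pickletools` — this is not Hugging Face's actual scanner, and the `list_pickle_imports` helper name is made up for illustration:

```python
import pickle
import pickletools
from collections import OrderedDict

def list_pickle_imports(data):
    """Statically list 'module.name' references a pickle stream would
    import on load. Nothing is unpickled, so nothing is executed."""
    found = set()
    string_args = []  # recent string arguments, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name == "GLOBAL":
            # Protocols 0-3: arg is a single "module name" string.
            module, name = arg.split(" ", 1)
            found.add(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL":
            # Protocol 4+: module and name were pushed as the two
            # preceding string arguments.
            if len(string_args) >= 2:
                found.add(f"{string_args[-2]}.{string_args[-1]}")
        if isinstance(arg, str):
            string_args.append(arg)
    return found

# Demo: an OrderedDict pickled at protocol 2 references collections.OrderedDict.
blob = pickle.dumps(OrderedDict(a=1), protocol=2)
print(list_pickle_imports(blob))  # → {'collections.OrderedDict'}
```

Run against a real file (e.g. `list_pickle_imports(open("dls_eng_Batch4.pkl", "rb").read())`), a scan like this would surface entries such as `__builtin__.getattr` — the kind of import that gets a file flagged as unsafe, since arbitrary callables reachable at unpickling time can execute code. For `pytorch_model.bin`, the four detected imports are the benign tensor-reconstruction helpers PyTorch itself uses, which is why it is marked Safe.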