hieunguyen1053/dump
Branch: main
1 contributor
History: 26 commits

Latest commit: 5da77c3 (verified), 20 days ago
hieunguyen1053: Upload data-00000-of-00001.arrow with huggingface_hub
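
Every commit message in the history comes from huggingface_hub's upload helpers. A minimal sketch of how a file gets pushed this way, assuming you are authenticated (huggingface-cli login) and have write access; the local path is a placeholder:

    from huggingface_hub import HfApi

    api = HfApi()
    # Uploads one file and creates a commit; files over the plain-git size
    # limit are stored through Git LFS automatically, which is why most
    # entries below carry an LFS tag.
    api.upload_file(
        path_or_fileobj="data-00000-of-00001.arrow",  # local file (placeholder)
        path_in_repo="data-00000-of-00001.arrow",     # destination in the repo
        repo_id="hieunguyen1053/dump",
        commit_message="Upload data-00000-of-00001.arrow with huggingface_hub",
    )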
Files

.gitattributes (1.61 kB): Upload libstdc++.so.6 with huggingface_hub, about 2 months ago

averaged_perceptron_tagger_eng.zip (1.54 MB, LFS): Upload averaged_perceptron_tagger_eng.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected

bge-m3.tar.gz (1.05 GB, LFS): Upload bge-m3.tar.gz with huggingface_hub, about 2 months ago

data-00000-of-00001.arrow (334 MB, LFS): Upload data-00000-of-00001.arrow with huggingface_hub, 20 days ago

data-desc-00000-of-00001.arrow (170 MB, LFS): Upload data-desc-00000-of-00001.arrow with huggingface_hub, about 2 months ago
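
The data-*-00000-of-00001.arrow shard names, together with the small state.json further down the listing, match the layout that datasets.Dataset.save_to_disk writes, so the datasets library can plausibly read them back. That is an inference from the naming convention, not something the repo documents:

    from datasets import Dataset, load_from_disk

    # If a local folder holds the full save_to_disk layout (shards + state.json):
    ds = load_from_disk("dump_local")  # hypothetical local directory

    # Or memory-map a single downloaded shard directly:
    ds = Dataset.from_file("data-00000-of-00001.arrow")
    print(ds.num_rows, ds.column_names)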
form.tar.gz (401 MB, LFS): Upload form.tar.gz with huggingface_hub, about 2 months ago

gemma2.tar.gz (5.18 GB, LFS): Upload gemma2.tar.gz with huggingface_hub, about 2 months ago

gliner.zip (925 MB, LFS): Upload gliner.zip with huggingface_hub, about 2 months ago
  pickle scan: no problematic imports detected

libstdc++.so.6 (18 MB, LFS): Upload libstdc++.so.6 with huggingface_hub, about 2 months ago

llama3.1.tar.gz (4.84 GB, LFS): Upload llama3.1.tar.gz with huggingface_hub, about 1 month ago
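
Multi-gigabyte blobs such as gemma2.tar.gz and llama3.1.tar.gz are easiest to fetch with hf_hub_download, which caches the file locally and resumes interrupted transfers. A sketch, assuming the repo is readable with your credentials:

    from huggingface_hub import hf_hub_download

    # Downloads into the local Hugging Face cache and returns the resolved path.
    path = hf_hub_download(
        repo_id="hieunguyen1053/dump",
        filename="llama3.1.tar.gz",
    )
    print(path)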
llct.vn.zip (15.4 MB, LFS): Upload llct.vn.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected

lyluanchinhtri_files.zip (1.91 GB, LFS): Upload lyluanchinhtri_files.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected

ncht.zip (4.69 MB, LFS): Upload ncht.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected

ollama (64 MB, LFS): Upload ollama with huggingface_hub, about 1 month ago

ollama_so.tar.gz (1.93 GB, LFS): Upload ollama_so.tar.gz with huggingface_hub, about 2 months ago
punkt.zip (13.9 MB, LFS): Upload punkt.zip with huggingface_hub, about 1 month ago
  pickle scan: 300 imports detected; the raw list is highly repetitive, so only the unique entries are kept here:
    collections.defaultdict, copy_reg._reconstructor,
    __builtin__.int, __builtin__.long, __builtin__.object, __builtin__.set,
    builtins.int, builtins.set,
    nltk.tokenize.punkt.PunktLanguageVars, nltk.tokenize.punkt.PunktParameters,
    nltk.tokenize.punkt.PunktSentenceTokenizer, nltk.tokenize.punkt.PunktToken
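
A pickle import is any module.attribute reference a pickle asks the loader to resolve at load time; because unpickling can invoke arbitrary callables, the Hub lists these references by scanning opcodes statically rather than by loading the file. A rough standard-library sketch of the same idea, assuming a local copy of punkt.zip (the Hub's real scanner is more thorough):

    import pickletools
    import zipfile

    def pickle_imports(data):
        """Collect module.name references by walking pickle opcodes with
        pickletools.genops -- nothing is ever unpickled or executed."""
        imports, strings = set(), []
        for opcode, arg, _pos in pickletools.genops(data):
            if opcode.name == "GLOBAL":              # protocol <= 3
                imports.add(arg.replace(" ", "."))
            elif opcode.name == "STACK_GLOBAL":      # protocol >= 4: module and
                if len(strings) >= 2:                # name were pushed as
                    imports.add(".".join(strings[-2:]))  # strings just before
            elif opcode.name in ("SHORT_BINUNICODE", "BINUNICODE",
                                 "BINUNICODE8", "UNICODE"):
                strings.append(arg)
        return imports

    with zipfile.ZipFile("punkt.zip") as zf:  # assumed local copy
        for name in zf.namelist():
            if name.endswith(".pickle"):
                print(name, sorted(pickle_imports(zf.read(name))))

The repetition in the scanner output is consistent with punkt.zip bundling one trained PunktSentenceTokenizer per language, each pickle referencing the same handful of classes; the __builtin__ and copy_reg names come from Python 2 pickles, the builtins names from Python 3 ones.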
punkt_tab.zip (4.26 MB, LFS): Upload punkt_tab.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected
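
punkt.zip, punkt_tab.zip, and averaged_perceptron_tagger_eng.zip are standard NLTK data packages, so they can be used offline by unzipping them into an NLTK data directory and pointing nltk.data.path at it. The ./nltk_data layout below (tokenizers/ and taggers/ subfolders) is NLTK's conventional layout and is an assumption about how these archives are meant to be deployed:

    import nltk

    # Look in a local directory first instead of downloading:
    nltk.data.path.insert(0, "./nltk_data")

    from nltk import pos_tag, word_tokenize
    from nltk.tokenize import sent_tokenize

    # punkt/punkt_tab back the sentence tokenizer; the averaged perceptron
    # model backs the English POS tagger.
    sentences = sent_tokenize("NLTK reads its models from nltk_data. This is a test.")
    print(pos_tag(word_tokenize(sentences[0])))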
state.json (247 Bytes): Upload state.json with huggingface_hub, about 1 month ago

tacpham.rar (2.26 GB, LFS): Upload tacpham.rar with huggingface_hub, about 1 month ago

tacpham.zip (2.67 GB, LFS): Upload tacpham.zip with huggingface_hub, about 1 month ago
  pickle scan: no problematic imports detected

van_kien_chinh_sach_moi_DCS.rar (78.3 MB, LFS): Upload van_kien_chinh_sach_moi_DCS.rar with huggingface_hub, about 1 month ago

vectordb.zip (55.5 MB, LFS): Upload vectordb.zip with huggingface_hub, about 1 month ago
  pickle scan: 2 imports detected: langchain_community.docstore.in_memory.InMemoryDocstore, langchain_core.documents.base.Document
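
The two imports flagged for vectordb.zip (an InMemoryDocstore plus langchain_core's Document) are exactly what LangChain's FAISS wrapper pickles into index.pkl when save_local is called, so the archive is most likely a saved FAISS vector store. A sketch of reloading it; the embedding model is a guess based on the neighboring bge-m3.tar.gz, and "vectordb" assumes the zip extracts to a folder of that name:

    from langchain_community.vectorstores import FAISS
    from langchain_huggingface import HuggingFaceEmbeddings

    # Queries must be embedded with the same model the index was built with;
    # BAAI/bge-m3 is an assumption, not documented by the repo.
    embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-m3")

    store = FAISS.load_local(
        "vectordb",                            # unzipped folder
        embeddings,
        allow_dangerous_deserialization=True,  # index.pkl is a pickle; only
    )                                          # enable for files you trust
    print(store.similarity_search("example query", k=5))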