laion/CLIP-ViT-H-14-laion2B-s32B-b79K

Tags: Zero-Shot Image Classification · OpenCLIP · PyTorch · Safetensors · clip
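
For the model's Zero-Shot Image Classification tag, a minimal usage sketch via the transformers pipeline; the image path and candidate labels below are placeholder inputs:

```python
from transformers import pipeline

# Zero-shot image classification with this checkpoint.
classifier = pipeline(
    task="zero-shot-image-classification",
    model="laion/CLIP-ViT-H-14-laion2B-s32B-b79K",
)

# "cat.png" and the label set are placeholders.
preds = classifier("cat.png", candidate_labels=["a cat", "a dog", "a bird"])
print(preds)  # list of {"score": ..., "label": ...} dicts, best match first
```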
Community (13 discussions)

What's the meaning of "s32B" and "b79K" in CLIP-ViT-H-14-laion2B-s32B-b79K?
1 comment · #13 opened about 1 year ago by xieyang233
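
For context: in LAION's OpenCLIP checkpoint naming, these suffixes encode training scale; `s32B` denotes roughly 32 billion samples seen during training and `b79K` a global batch size of roughly 79,000.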

Different results between the model's Space and local deployment
4 comments · #12 opened over 1 year ago by jeff-lee

Extracting `text_encoder` from `ViT-H-14` using `open_clip_torch`?
1 comment · #9 opened over 1 year ago by Chanuhf
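
A minimal sketch relevant to this thread, assuming the `open_clip_torch` package: `model.encode_text` runs only the text tower, so text embeddings can be computed without manually splitting the model.

```python
import torch
import open_clip

# Load the ViT-H-14 checkpoint trained on LAION-2B; "laion2b_s32b_b79k"
# is this repo's pretrained tag in open_clip.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")
model.eval()

tokens = tokenizer(["a photo of a cat", "a photo of a dog"])
with torch.no_grad():
    # Only the text encoder (token embedding, transformer, text projection)
    # runs here; the vision tower is never touched.
    text_features = model.encode_text(tokens)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
```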

What is the difference between `open_clip_pytorch_model.bin` and `pytorch_model.bin`?
1 comment · #8 opened almost 2 years ago by buaadwxl
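
A hedged note on this question: in LAION's CLIP repos, `open_clip_pytorch_model.bin` appears to hold the weights in OpenCLIP's state-dict layout, while `pytorch_model.bin` (and the Safetensors file) hold the same weights converted to the Hugging Face transformers layout. A sketch of loading each:

```python
# OpenCLIP layout (open_clip_pytorch_model.bin), pulled from the Hub
# via open_clip's hf-hub: prefix.
import open_clip
oc_model, _, preprocess = open_clip.create_model_and_transforms(
    "hf-hub:laion/CLIP-ViT-H-14-laion2B-s32B-b79K"
)

# transformers layout (pytorch_model.bin / model.safetensors).
from transformers import CLIPModel
hf_model = CLIPModel.from_pretrained("laion/CLIP-ViT-H-14-laion2B-s32B-b79K")
```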

Request: DOI
2 comments · #7 opened almost 2 years ago by Ab0715

Make a Space, please
#5 opened over 2 years ago by micole66

Make the model load automatically without waiting
5 comments · #3 opened over 2 years ago by micole66

`model_max_length` might be missing from `tokenizer_config.json`
2 comments · #2 opened over 2 years ago by fischcheng
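
If `model_max_length` really is absent from `tokenizer_config.json`, one workaround is to pass it explicitly at load time; CLIP text encoders use a 77-token context window:

```python
from transformers import CLIPTokenizer

# Keyword arguments to from_pretrained override (or fill in) values
# from tokenizer_config.json; CLIP's context length is 77 tokens.
tokenizer = CLIPTokenizer.from_pretrained(
    "laion/CLIP-ViT-H-14-laion2B-s32B-b79K", model_max_length=77
)
print(tokenizer.model_max_length)  # 77
```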