Spaces: yusufs/sailor2-3b-chat (Paused)
Branch: main
1 contributor · History: 60 commits

Latest commit by yusufs (78963b9, 9 days ago) — fix(float16): Bfloat16 is only supported on GPUs with compute capability of at least 8.0. Your Tesla T4 GPU has compute capability 7.5. You can use float16 instead by explicitly setting the `dtype` flag in the CLI, for example: --dtype=half.
| File | Size | Last commit | Updated |
|---|---|---|---|
| .gitignore | 19 Bytes | feat(download_model.py): remove download_model.py during build; it was causing a big image size | 5 months ago |
| Dockerfile | 1.38 kB | fix(using sail/Sailor2-3B-Chat): sail/Sailor2-3B-Chat | 9 days ago |
| README.md | 1.73 kB | feat(add-model): always download the model during build; it will be cached in consecutive builds | 5 months ago |
| download_model.py | 700 Bytes | feat(add-model): always download the model during build; it will be cached in consecutive builds | 5 months ago |
| main.py | 6.7 kB | feat(parse): parse output | 5 months ago |
| openai_compatible_api_server.py | 24.4 kB | feat(dep_sizes.txt): remove dep_sizes.txt during build; it is not needed | 5 months ago |
| poetry.lock | 426 kB | feat(refactor): move the files to root | 5 months ago |
| pyproject.toml | 416 Bytes | feat(refactor): move the files to root | 5 months ago |
| requirements.txt | 9.99 kB | feat(first-commit): follow examples and tutorials | 5 months ago |
| run-llama.sh | 1.51 kB | fix(runner.sh): --enforce-eager does not support values | 3 months ago |
| run-sailor.sh | 1.83 kB | fix(runner.sh): --enforce-eager does not support values | 3 months ago |
| runner.sh | 2.46 kB | fix(float16): bfloat16 requires compute capability ≥ 8.0; the Tesla T4 (7.5) must use float16 via --dtype=half | 9 days ago |
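The latest commit follows a simple rule: vLLM's bfloat16 path requires a GPU with compute capability 8.0 or higher, while the Tesla T4 is only 7.5, so this Space must launch the server with --dtype=half (float16). A minimal sketch of that selection logic (the helper name and its signature are illustrative, not part of this repo's code):

```python
def pick_dtype(compute_capability: tuple) -> str:
    """Choose a vLLM --dtype value for a given CUDA compute capability.

    Bfloat16 is only supported on GPUs with compute capability >= 8.0
    (e.g. A100); older cards such as the Tesla T4 (7.5) must fall back
    to float16, passed on the CLI as --dtype=half.
    """
    major, minor = compute_capability
    return "bfloat16" if (major, minor) >= (8, 0) else "half"

# Tesla T4 (7.5) -> half, A100 (8.0) -> bfloat16
print(pick_dtype((7, 5)))  # half
print(pick_dtype((8, 0)))  # bfloat16
```

Relatedly, the runner.sh fix reflects that vLLM's --enforce-eager is a boolean flag: it is passed bare on the command line, not with a value.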