#!/bin/bash
# Validate MODEL_ID
if [[ -z "$MODEL_ID" ]]; then
echo "Error: MODEL_ID is not set."
exit 1
fi
# Assign MODEL_NAME and MODEL_REV based on MODEL_ID
case "$MODEL_ID" in
1)
MODEL_NAME="meta-llama/Llama-3.2-3B-Instruct"
MODEL_REV="0cb88a4f764b7a12671c53f0838cd831a0843b95"
;;
2)
MODEL_NAME="sail/Sailor2-3B-Chat"
MODEL_REV="d60722644e700133576489719dcbc288036628d5"
;;
*)
echo "Error: Invalid MODEL_ID. Valid values are 1 or 2."
exit 1
;;
esac
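# Usage sketch (assuming this script is saved as runner.sh and is executable):
#   MODEL_ID=1 ./runner.sh   # serves meta-llama/Llama-3.2-3B-Instruct
#   MODEL_ID=2 ./runner.sh   # serves sail/Sailor2-3B-Chat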
printf "Running %s using vLLM OpenAI compatible API Server at port %s\n" $MODEL_NAME "7860"
# https://medium.com/geekculture/the-story-behind-random-seed-42-in-machine-learning-b838c4ac290a
# [Seven and a half million years later…. Fook and Lunkwill are long gone, but their descendants continue what they started]
# “All right,” said Deep Thought. “The Answer to the Great Question…”
# “Yes..!”
# “Of Life, the Universe and Everything…” said Deep Thought.
# “Yes…!”
# “Is…” said Deep Thought, and paused.
# “Yes…!”
# “Is…”
# “Yes…!!!…?”
# “Forty-two,” said Deep Thought, with infinite majesty and calm.
#
# ―Douglas Adams, The Hitchhiker’s Guide to the Galaxy
#
#
# For sail/Sailor-4B-Chat, if only 26576 tokens are needed, it can run on lower-spec hardware:
#   Nvidia 1x L4: 8 vCPU, 30 GB RAM, 24 GB VRAM (US$ 0.80/hour, i.e. US$ 576/month at 720 hours)
# A larger context requires more VRAM; for example, 32768 tokens need at minimum:
#   Nvidia 1x L40S: 8 vCPU, 62 GB RAM, 48 GB VRAM (US$ 1.80/hour, i.e. US$ 1,296/month at 720 hours)
#
# For meta-llama/Llama-3.2-3B-Instruct, if only 32768 tokens are needed, it can run on lower-spec hardware:
#   Nvidia T4 small: 4 vCPU, 15 GB RAM, 16 GB VRAM (US$ 0.40/hour, i.e. US$ 288/month at 720 hours)
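#
# The monthly figures above are just the hourly rate times 720 hours; a quick
# sanity check from the shell (rates are the ones quoted above):
#   awk 'BEGIN { printf "US$ %.0f\n", 0.80 * 720 }'   # => US$ 576   (L4)
#   awk 'BEGIN { printf "US$ %.0f\n", 1.80 * 720 }'   # => US$ 1296  (L40S)
#   awk 'BEGIN { printf "US$ %.0f\n", 0.40 * 720 }'   # => US$ 288   (T4 small)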
# Run the vLLM OpenAI-compatible API server with the model selected above.
# Supported values for --task: {'generate', 'embedding'}
# --dtype float16 is used instead of bfloat16 because bfloat16 requires a GPU
# with compute capability >= 8.0, and a Tesla T4 only has 7.5.
python -u /app/openai_compatible_api_server.py \
--model "${MODEL_NAME}" \
--task generate \
--revision "${MODEL_REV}" \
--code-revision "${MODEL_REV}" \
--tokenizer-revision "${MODEL_REV}" \
--seed 42 \
--host 0.0.0.0 \
--port 7860 \
--max-num-batched-tokens 32768 \
--max-model-len 32768 \
--dtype float16 \
--enforce-eager \
--gpu-memory-utilization 0.9 \
--enable-prefix-caching \
--disable-log-requests \
--trust-remote-code
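
# Once the server is up, it exposes the OpenAI-compatible REST API. A minimal
# smoke test from another shell, assuming the server is reachable on
# localhost:7860 and MODEL_ID=2 was used (adjust "model" to the MODEL_NAME above):
#
#   curl http://localhost:7860/v1/models
#
#   curl http://localhost:7860/v1/chat/completions \
#     -H "Content-Type: application/json" \
#     -d '{
#           "model": "sail/Sailor2-3B-Chat",
#           "messages": [{"role": "user", "content": "Hello!"}],
#           "max_tokens": 64
#         }'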