vllm (pretrained=/root/autodl-tmp/Phi-3-medium-4k-instruct-abliterated-v3,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=2,enforce_eager=True), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 1

| Tasks | Version | Filter           | n-shot | Metric        | Value |   Stderr |
|-------|--------:|------------------|-------:|---------------|------:|---------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ | 0.804 | ± 0.0252 |
| gsm8k |       3 | strict-match     |      5 | exact_match ↑ | 0.728 | ± 0.0282 |
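The header above is the summary line that lm-evaluation-harness prints for this run. As a minimal sketch (not the exact script used here), the same configuration can be reproduced through the harness's Python API, assuming `lm-eval` and `vllm` are installed and the checkpoint is available at the local path shown in the header:

```python
import lm_eval

# gsm8k, 5-shot, first 250 samples, vLLM backend with 2-way tensor
# parallelism and eager execution, matching the header above.
# The pretrained path is copied from the header and assumes the
# checkpoint is stored locally at that location.
results = lm_eval.simple_evaluate(
    model="vllm",
    model_args=(
        "pretrained=/root/autodl-tmp/Phi-3-medium-4k-instruct-abliterated-v3,"
        "add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,"
        "gpu_memory_utilization=0.80,max_num_seqs=2,enforce_eager=True"
    ),
    tasks=["gsm8k"],
    num_fewshot=5,
    limit=250,
    batch_size=1,
)

# Print the gsm8k accuracy and stderr for both answer-extraction filters.
print(results["results"]["gsm8k"])
```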

vllm (pretrained=/root/autodl-tmp/output0.85,add_bos_token=true,tensor_parallel_size=2,max_model_len=2048,gpu_memory_utilization=0.80,max_num_seqs=5), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: 5

| Tasks | Version | Filter           | n-shot | Metric        | Value |   Stderr |
|-------|--------:|------------------|-------:|---------------|------:|---------:|
| gsm8k |       3 | flexible-extract |      5 | exact_match ↑ | 0.812 | ± 0.0248 |
| gsm8k |       3 | strict-match     |      5 | exact_match ↑ | 0.764 | ± 0.0269 |
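The quantized checkpoint can also be loaded for offline inference with vLLM. Below is a minimal sketch reusing the engine settings from the second header; the local path `/root/autodl-tmp/output0.85` is copied from that header, and the prompt is only an illustrative GSM8K-style example, not part of the evaluation:

```python
from vllm import LLM, SamplingParams

# Load the W8A8 dynamic per-token checkpoint with the same engine
# settings as the second evaluation header.
llm = LLM(
    model="/root/autodl-tmp/output0.85",
    tensor_parallel_size=2,
    max_model_len=2048,
    gpu_memory_utilization=0.80,
    max_num_seqs=5,
)

# Greedy decoding, as used for exact-match style evaluation.
params = SamplingParams(temperature=0.0, max_tokens=256)
prompts = ["Natalia sold clips to 48 of her friends in April ..."]

outputs = llm.generate(prompts, params)
print(outputs[0].outputs[0].text)
```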
