arielnlee committed
Commit a06dc5e
1 Parent(s): 9c2bb9b

Update README.md

Files changed (1): README.md (+5, −5)
README.md CHANGED
@@ -13,7 +13,7 @@ metrics:
 
 # Information
 
-GPlatty-30B is a merge of [garage-bAIdnd/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b)
+GPlatty-30B is a merge of [garage-bAInd/Platypus-30B](https://huggingface.co/lilloukas/Platypus-30B) and [chansung/gpt4-alpaca-lora-30b](https://huggingface.co/chansung/gpt4-alpaca-lora-30b)
 
 | Metric | Value |
 |-----------------------|-------|
@@ -51,22 +51,22 @@ Each task was evaluated on a single A100 80GB GPU.
 
 ARC:
 ```
-python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
+python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/GPlatty-30B --tasks arc_challenge --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/arc_challenge_25shot.json --device cuda --num_fewshot 25
 ```
 
 HellaSwag:
 ```
-python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
+python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/GPlatty-30B --tasks hellaswag --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/hellaswag_10shot.json --device cuda --num_fewshot 10
 ```
 
 MMLU:
 ```
-python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
+python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/GPlatty-30B --tasks hendrycksTest-* --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/mmlu_5shot.json --device cuda --num_fewshot 5
 ```
 
 TruthfulQA:
 ```
-python main.py --model hf-causal-experimental --model_args pretrained=garage-bAIdnd/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
+python main.py --model hf-causal-experimental --model_args pretrained=garage-bAInd/GPlatty-30B --tasks truthfulqa_mc --batch_size 1 --no_cache --write_out --output_path results/Platypus-30B/truthfulqa_0shot.json --device cuda
 ```
 ## Limitations and bias
 
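The four harness invocations in the README differ only in task name, shot count, and output file. As a sketch (the `print_eval_commands` function, the loop, and the dry-run `echo` are my own additions; the flags, task names, and output paths are copied from the commands above), a POSIX-shell helper that prints each command:

```shell
#!/bin/sh
# Dry-run helper: prints the four lm-evaluation-harness commands from the
# README instead of executing them. TruthfulQA is given an explicit
# --num_fewshot 0 here, which matches the default the original command
# relies on implicitly.
print_eval_commands() {
  model=garage-bAInd/GPlatty-30B
  # Each entry: task|num_fewshot|output-file stem (values from the README).
  for spec in 'arc_challenge|25|arc_challenge_25shot' \
              'hellaswag|10|hellaswag_10shot' \
              'hendrycksTest-*|5|mmlu_5shot' \
              'truthfulqa_mc|0|truthfulqa_0shot'; do
    task=${spec%%|*}
    rest=${spec#*|}
    shots=${rest%%|*}
    stem=${rest#*|}
    echo python main.py --model hf-causal-experimental \
      --model_args "pretrained=$model" --tasks "$task" \
      --batch_size 1 --no_cache --write_out \
      --output_path "results/Platypus-30B/$stem.json" \
      --device cuda --num_fewshot "$shots"
  done
}

print_eval_commands
```

Removing the leading `echo` turns the dry run into the real evaluation, assuming it is run from an lm-evaluation-harness checkout (for `main.py`) on a CUDA-capable GPU, as in the README.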