Angainor committed on
Commit 2d789f7 · 1 Parent(s): 264a905

Update README.md

Files changed (1): README.md +24 -3
README.md CHANGED
@@ -1,3 +1,24 @@
- ---
- license: mit
- ---
+ This repo contains a low-rank adapter for LLaMA-13b fit on the Stanford Alpaca dataset.
+
+ This version of the weights was trained on dual RTX 3090s with the following hyperparameters:
+
+ Epochs: 10
+ Batch size: 128
+ Cutoff length: 256
+ Learning rate: 3e-4
+ Lora r: 16
+ Lora alpha: 16
+ Lora target modules: q_proj, k_proj, v_proj, o_proj
+ That is:
+
+ OMP_NUM_THREADS=4 WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py \
+     --base_model='decapoda-research/llama-13b-hf' \
+     --data_path='yahma/alpaca-cleaned' \
+     --num_epochs=10 \
+     --output_dir='./lora-alpaca-13b-256-qkvo' \
+     --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \
+     --lora_r=16 \
+     --val_set_size=0 \
+     --micro_batch_size=32
+
+ Instructions for running it can be found at https://github.com/tloen/alpaca-lora.
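For reference, a minimal sketch of the PEFT `LoraConfig` that the hyperparameters above correspond to. Only `r`, `lora_alpha`, and the target modules are stated in this README; `task_type` and every omitted field are assumptions or library defaults, and the authoritative values ship with the adapter in its `adapter_config.json`.

```python
from peft import LoraConfig

# Sketch of the adapter configuration implied by the training flags above.
# Only r, lora_alpha, and target_modules come from this README; task_type
# is assumed and all other fields are left at the peft library defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```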
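As a quick orientation, and not a substitute for the alpaca-lora instructions linked above, below is a hedged sketch of loading this adapter on top of the base model with `transformers` and `peft` for inference. The adapter repo id is a placeholder and the prompt template is the usual Alpaca-style instruction format; substitute the actual paths and prompts you use.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-13b-hf"
adapter = "Angainor/alpaca-lora-13b"  # placeholder: use the actual adapter repo or local path

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
# Attach the low-rank adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter, torch_dtype=torch.float16)
model.eval()

# Alpaca-style instruction prompt (assumed; match whatever template you trained with).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```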