chrisdono committed
Commit b9161b5
1 Parent(s): 099d41e

formatting update

Files changed (1):
  1. README.md +1 -0
README.md CHANGED
@@ -7,6 +7,7 @@ To get the training to work on the 2 GPUs (utilize both GPUs simultaneously), th
 WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node=2 --master_port=1234 finetune.py --base_model 'decapoda-research/llama-7b-hf' --data_path 'yahma/alpaca-cleaned' --output_dir './lora-alpaca' --num_epochs 1 --micro_batch_size 8
 
 Note 1. Micro batch size was increased from the default 4 to 8. Further increases may be possible, judging by other training runs; this was a first attempt.
+
 Note 2. The output directory was initially lora-alpaca; its contents were moved to a new folder when the git repository was initialized.
 
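For context on how the flags in the command above interact, the sketch below assumes finetune.py follows alpaca-lora's usual pattern: a default effective batch size of 128, gradient accumulation derived from the micro batch size, and a per-rank device map under DDP. It illustrates the mechanism only; it is not the verbatim source of the script.

```python
import os

# Minimal sketch (assumed, not verbatim) of how alpaca-lora's finetune.py
# turns the torchrun environment into per-device training settings.

batch_size = 128       # assumed default effective batch size in finetune.py
micro_batch_size = 8   # raised from the default 4 via --micro_batch_size

# Gradients are accumulated over several micro batches per optimizer step.
gradient_accumulation_steps = batch_size // micro_batch_size  # 128 // 8 = 16

device_map = "auto"
world_size = int(os.environ.get("WORLD_SIZE", 1))  # torchrun sets this to 2
if world_size != 1:
    # Under DDP the two GPUs process micro batches in parallel, so the
    # accumulation count is split across ranks: 16 // 2 = 8 steps per GPU.
    gradient_accumulation_steps //= world_size
    # Each rank pins the whole model to its own GPU.
    device_map = {"": int(os.environ.get("LOCAL_RANK", 0))}
```

With these numbers, each GPU accumulates 8 micro batches of 8 examples per optimizer step, so the effective batch size stays at 128 while per-step GPU memory is governed by the micro batch size alone.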