---
datasets:
- jeremyc/Alpaca-Lora-GPT4-Swedish
language:
- en
- sv
---

## Alpaca-Lora-Swe 7B

Alpaca-Lora-Swe-7b is a LLaMA-7B model fine-tuned on a translated version of the [Stanford Alpaca](https://github.com/tatsu-lab/stanford_alpaca) dataset to follow instructions in 🇸🇪 Swedish.

The model was trained for 5 epochs on a combined English + Swedish dataset with the original Alpaca-Lora prompt template, using the following command:
```
python3 finetune.py \
    --base_model='./llama-7b' \
    --data_path='alpaca_gpt4_combined.json' \
    --output_dir='./lora-alpaca-swe' \
    --resume_from_checkpoint true \
    --micro_batch_size=14 \
    --num_epochs=5 \
    --cutoff_len=512 \
    --group_by_length \
    --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]'
```
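
The `alpaca_gpt4_combined.json` file passed to `--data_path` is simply the English Alpaca-GPT4 data concatenated with its Swedish translation. The exact preparation script is not published, but a minimal sketch looks like the following (the two input file names are assumptions; both files use the standard Alpaca record schema):

```
import json

# Hypothetical file names: the English Alpaca-GPT4 data and its Swedish
# translation. Each file is a JSON list of records with the keys
# instruction / input / output.
with open("alpaca_gpt4_data.json") as f:
    english = json.load(f)
with open("alpaca_gpt4_swedish.json") as f:
    swedish = json.load(f)

# Concatenate the two lists into the combined training set.
combined = english + swedish

with open("alpaca_gpt4_combined.json", "w") as f:
    json.dump(combined, f, ensure_ascii=False, indent=2)
```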

The training run logs are available at https://wandb.ai/jeremy-cochoy/huggingface/runs/896ntg42
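
As a rough usage sketch (not an official snippet from the repo), the resulting LoRA adapter can be loaded on top of the frozen base model with the `peft` library. The local paths below mirror the training command above and are assumptions; substitute the actual locations of your base weights and adapter:

```
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Load the same base weights that were used for fine-tuning.
base = LlamaForCausalLM.from_pretrained(
    "./llama-7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained("./llama-7b")

# Attach the LoRA adapter produced by finetune.py (path is an assumption).
model = PeftModel.from_pretrained(base, "./lora-alpaca-swe")

# Standard Alpaca prompt template (no-input variant), as used in training.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSkriv en kort dikt om hösten.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because the adapter was trained on both languages, the same template works with English instructions as well.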

For more information, please visit the GitHub repo: https://github.com/jeremycochoy/alpaca-lora-swe