---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- ko
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
datasets:
- wikimedia/wikipedia
- FreedomIntelligence/alpaca-gpt4-korean
---

# unsloth/Meta-Llama-3.1-8B-bnb-4bit Fine-Tuning after Continued Pretraining
# (TREX-Lab at Seoul Cyber University)


## Summary
  - Base Model : unsloth/Meta-Llama-3.1-8B-bnb-4bit
  - Datasets : wikimedia/wikipedia (Continued Pretraining), FreedomIntelligence/alpaca-gpt4-korean (Fine-Tuning)
  - This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
  - Goal : test whether fine-tuning a large language model is feasible on a single NVIDIA A30 GPU (successful)


- **Developed by:** TREX-Lab at Seoul Cyber University
- **Language(s) (NLP):** Korean
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B-bnb-4bit
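
The usage examples below assume that `model` and `tokenizer` are already in scope. A minimal loading sketch with Unsloth follows; the `max_seq_length` value is an assumption, and the checkpoint name should be replaced with this repository's id to load the fine-tuned weights:

```
  from unsloth import FastLanguageModel

  model, tokenizer = FastLanguageModel.from_pretrained(
      model_name = "unsloth/Meta-Llama-3.1-8B-bnb-4bit",  # substitute the fine-tuned repo id
      max_seq_length = 2048,  # assumed; not stated in this card
      dtype = None,           # auto-detect (bfloat16 on an A30)
      load_in_4bit = True,
  )
```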

## Continued Pretraining

Hyperparameters for continued pretraining on the Korean wikimedia/wikipedia corpus:
```
  warmup_steps = 10
  learning_rate = 5e-5
  embedding_learning_rate = 1e-5
  bf16 = True
  optim = "adamw_8bit"
  weight_decay = 0.01
  lr_scheduler_type = "linear"
```

```
  final training loss : 1.1716
```
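
For reference, here is a sketch of how these settings plug into Unsloth's continued-pretraining trainer. Only the hyperparameters listed above come from this run; the batch size, accumulation steps, sequence length, and Wikipedia dump date are assumptions:

```
  from datasets import load_dataset
  from unsloth import UnslothTrainer, UnslothTrainingArguments

  # Korean Wikipedia; the dump date is assumed.
  dataset = load_dataset("wikimedia/wikipedia", "20231101.ko", split = "train")

  # Note: embedding_learning_rate takes effect when "embed_tokens" and "lm_head"
  # are included in the LoRA target_modules, per Unsloth's continued-pretraining setup.
  trainer = UnslothTrainer(
      model = model,
      tokenizer = tokenizer,
      train_dataset = dataset,
      dataset_text_field = "text",
      max_seq_length = 2048,                # assumed
      args = UnslothTrainingArguments(
          per_device_train_batch_size = 2,  # assumed
          gradient_accumulation_steps = 8,  # assumed
          warmup_steps = 10,
          learning_rate = 5e-5,
          embedding_learning_rate = 1e-5,
          bf16 = True,
          optim = "adamw_8bit",
          weight_decay = 0.01,
          lr_scheduler_type = "linear",
          output_dir = "outputs",
      ),
  )
  trainer.train()
```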

## Fine-Tuning Detail

Hyperparameters for instruction tuning on FreedomIntelligence/alpaca-gpt4-korean:
```
  warmup_steps = 10
  learning_rate = 5e-5
  embedding_learning_rate = 1e-5
  bf16 = True
  optim = "adamw_8bit"
  weight_decay = 0.001
  lr_scheduler_type = "linear"
```
```
  final training loss : 0.6996
```
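
The fine-tuning pass reuses the trainer setup above (with `weight_decay = 0.001`); the extra step is mapping FreedomIntelligence/alpaca-gpt4-korean records into the Korean Alpaca template used in the Usage sections. A sketch; the `conversations` field layout is an assumption about the dataset schema:

```
  from datasets import load_dataset

  # Korean Alpaca template (English gloss: "Below is an instruction that describes
  # a task. Write a response that appropriately completes the request.")
  model_prompt = """λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.
  
  ### 지침:
  {}
  
  ### 응닡:
  {}"""

  def formatting_func(examples):
      # Assumption: ShareGPT-style human/gpt turn pairs; adjust to the real schema.
      texts = []
      for conv in examples["conversations"]:
          instruction, response = conv[0]["value"], conv[1]["value"]
          # Append EOS so the model learns to stop after the response.
          texts.append(model_prompt.format(instruction, response) + tokenizer.eos_token)
      return {"text": texts}

  dataset = load_dataset("FreedomIntelligence/alpaca-gpt4-korean", split = "train")
  dataset = dataset.map(formatting_func, batched = True)
```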

## Usage #1 (batch decode)
```
  # Assumes `model` and `tokenizer` are loaded as shown above.
  from unsloth import FastLanguageModel

  # Korean Alpaca-style template: "Below is an instruction that describes a task.
  # Write a response that appropriately completes the request." /
  # "### Instruction:" / "### Response:"
  model_prompt = """λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.
  
  ### 지침:
  {}
  
  ### 응닡:
  {}"""
  
  FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path
  inputs = tokenizer(
      [
          model_prompt.format(
              # "Who is Admiral Yi Sun-sin? Please tell me in detail."
              "μ΄μˆœμ‹  μž₯ꡰ은 λˆ„κ΅¬μΈκ°€μš” ? μžμ„Έν•˜κ²Œ μ•Œλ €μ£Όμ„Έμš”.",
              "",  # response slot left empty for the model to fill in
          )
      ], return_tensors = "pt").to("cuda")
  
  outputs = model.generate(**inputs, max_new_tokens = 128, use_cache = True)
  tokenizer.batch_decode(outputs)
```
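
`batch_decode` returns the prompt together with the generated response. To keep only the new text, slice off the prompt tokens (a small sketch, not from the original card):

```
  prompt_len = inputs["input_ids"].shape[1]
  response = tokenizer.batch_decode(outputs[:, prompt_len:], skip_special_tokens = True)[0]
  print(response)
```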

## Usage #2 (streaming generation)
```
  # Assumes `model` and `tokenizer` are loaded as shown above.
  from unsloth import FastLanguageModel
  from transformers import TextStreamer

  # Same Korean Alpaca-style template as in Usage #1.
  model_prompt = """λ‹€μŒμ€ μž‘μ—…μ„ μ„€λͺ…ν•˜λŠ” λͺ…λ Ήμž…λ‹ˆλ‹€. μš”μ²­μ„ μ μ ˆν•˜κ²Œ μ™„λ£Œν•˜λŠ” 응닡을 μž‘μ„±ν•˜μ„Έμš”.
  
  ### 지침:
  {}
  
  ### 응닡:
  {}"""
  
  FastLanguageModel.for_inference(model)
  inputs = tokenizer(
      [
          model_prompt.format(
              # "Give a broad description of the Earth."
              "지ꡬλ₯Ό κ΄‘λ²”μœ„ν•˜κ²Œ μ„€λͺ…ν•˜μ„Έμš”.",
              "",
          )
      ], return_tensors = "pt").to("cuda")
  
  # Stream tokens to stdout as they are generated.
  text_streamer = TextStreamer(tokenizer)
  # repetition_penalty must be > 1.0 to discourage repetition
  # (values below 1.0 actively encourage it), so 1.1 is used here.
  _ = model.generate(**inputs, streamer = text_streamer, max_new_tokens = 128, repetition_penalty = 1.1)
```