sepiatone committed
Commit 652aac7 · verified · 1 Parent(s): 169237a

end of finetuning
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
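The new rule keeps `tokenizer.json` out of regular Git storage and routes it through Git LFS. As an illustration (not part of the commit), this is the kind of line `git lfs track` appends; the sketch below assumes `git-lfs` is installed and is run from the repository root:

```python
# Hypothetical sketch: `git lfs track <pattern>` appends the
# filter/diff/merge attributes seen in the hunk above to .gitattributes.
import subprocess

subprocess.run(["git", "lfs", "track", "tokenizer.json"], check=True)
# .gitattributes now ends with:
# tokenizer.json filter=lfs diff=lfs merge=lfs -text
```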
README.md CHANGED
@@ -1,58 +1,57 @@
 ---
 base_model: meta-llama/Llama-3.2-3B-Instruct
-library_name: peft
-license: llama3.2
+library_name: transformers
+model_name: llama-3.2-3b-sft-indicqa-ml-v0.1
 tags:
+- generated_from_trainer
 - trl
 - sft
-- generated_from_trainer
-model-index:
-- name: llama-3.2-3b-sft-indicqa-ml-v0.1
-  results: []
+licence: license
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# llama-3.2-3b-sft-indicqa-ml-v0.1
-
-This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
-
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 4
-- eval_batch_size: 8
-- seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 1
-- mixed_precision_training: Native AMP
-
-### Training results
-
-
-
-### Framework versions
-
-- PEFT 0.13.2
-- Transformers 4.44.2
-- Pytorch 2.5.0+cu121
-- Datasets 3.1.0
-- Tokenizers 0.19.1
+# Model Card for llama-3.2-3b-sft-indicqa-ml-v0.1
+
+This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="sepiatone/llama-3.2-3b-sft-indicqa-ml", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
+
+## Training procedure
+
+
+
+This model was trained with SFT.
+
+### Framework versions
+
+- TRL: 0.12.0
+- Transformers: 4.46.1
+- Pytorch: 2.5.0+cu121
+- Datasets: 3.1.0
+- Tokenizers: 0.20.1
+
+## Citations
+
+
+
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title = {{TRL: Transformer Reinforcement Learning}},
+    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+    year = 2020,
+    journal = {GitHub repository},
+    publisher = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
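The new card's Quick start loads the repo straight into a `transformers` pipeline. Since the repository actually ships a PEFT adapter (see `adapter_config.json` below), a hedged alternative is to load the adapter explicitly on top of its base model; the repo id here is an assumption inferred from the card's `model_name`:

```python
# Sketch, not from the commit: load the LoRA adapter via peft.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

repo_id = "sepiatone/llama-3.2-3b-sft-indicqa-ml-v0.1"  # assumed repo id
model = AutoPeftModelForCausalLM.from_pretrained(repo_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(repo_id)
```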
adapter_config.json CHANGED
@@ -21,12 +21,12 @@
   "revision": null,
   "target_modules": [
     "q_proj",
+    "k_proj",
     "gate_proj",
+    "down_proj",
     "up_proj",
-    "v_proj",
-    "k_proj",
     "o_proj",
-    "down_proj"
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8cecdb346932b342c411df291f651f78a4b3293e21a8ebab0ca9302602869454
+oid sha256:b064db53470ac13a5a228e15f58ed400c29008c3510870016bf05ad09068dda9
 size 97307544
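What the diff shows for `adapter_model.safetensors` is a Git LFS pointer stub, not the ~97 MB weights themselves: only the `oid` changed, meaning the adapter was retrained and re-uploaded at the same size. A minimal sketch for telling a pointer stub from real content:

```python
# Minimal sketch: an LFS pointer is a tiny text file that starts with the
# spec line seen in the diff; the real file holds binary weights.
def is_lfs_pointer(path: str) -> bool:
    with open(path, "rb") as f:
        head = f.read(64)
    return head.startswith(b"version https://git-lfs.github.com/spec/v1")

print(is_lfs_pointer("adapter_model.safetensors"))
```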
runs/Nov04_10-52-04_23a4df52558f/events.out.tfevents.1730717567.23a4df52558f.653.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a97ba4702b16abaa59476205d5ed098a1ef028e38c770737f03ed60ae5e6070c
+size 6110

runs/Nov04_11-20-04_23a4df52558f/events.out.tfevents.1730719205.23a4df52558f.8633.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d872d30c21b0598b659576ae6b2a4bba76cfde9ecd58278ac780592bff9f2c31
+size 4184

runs/Nov04_11-23-10_23a4df52558f/events.out.tfevents.1730719391.23a4df52558f.9568.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bf666c52220cd655ec0dbd054d37b59b1ece68eceed50e7a53d806f4b64e96b2
+size 4184
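The three `runs/...` files are TensorBoard event logs from the SFT run; the two later files are much smaller, consistent with short restarted runs. A hedged sketch for inspecting one of them once the LFS content has been pulled; the scalar tag names are not visible in the diff, so the code discovers them rather than assuming any:

```python
# Sketch, assuming tensorboard is installed and the run directory
# has been fetched from LFS.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Nov04_10-52-04_23a4df52558f")
acc.Reload()
for tag in acc.Tags()["scalars"]:      # list whatever scalar tags exist
    for event in acc.Scalars(tag):
        print(tag, event.step, event.value)
```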
tokenizer.json CHANGED
The diff for this file is too large to render.
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:052e1d5b4b6afcd8169ed69ba6248c9ec4771619b8a1bf12b710cb5ef84da4be
+oid sha256:20a886305a79131567138cb4e09316a6d2b3113a1a2b627585bdfbf922ebe656
 size 5496
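`training_args.bin` is the pickled `TrainingArguments` object that the `transformers` Trainer saves alongside checkpoints; only its hash changed here, so the run was relaunched with possibly different settings. A quick way to inspect it, assuming `transformers` is importable (it is a full pickle, hence `weights_only=False`):

```python
# Sketch: training_args.bin is a torch-pickled TrainingArguments object.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```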