caffeinatedcherrychic committed
Commit b8d717d
1 Parent(s): 81352ea

Upload 7 files

Files changed (3)
  1. README.md +3 -3
  2. adapter_config.json +4 -4
  3. adapter_model.bin +1 -1
README.md CHANGED
@@ -21,7 +21,7 @@ base_model: NousResearch/Llama-2-7b-hf
 bf16: false
 dataset_prepared_path: null
 datasets:
-  - path: caffeinatedcherrychic/test-dataset-io
+  - path: caffeinatedcherrychic/cidds-agg-alpaca-iio
     type: alpaca
 debug: null
 deepspeed: null
@@ -83,7 +83,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7530
+- Loss: 0.6372
 
 ## Model description
 
@@ -116,7 +116,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.8802 | 0.0 | 20 | 0.7530 |
+| 0.7917 | 0.0 | 20 | 0.6372 |
 
 
 ### Framework versions
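
The `type: alpaca` entry in the README diff above tells the trainer to read each dataset row as an instruction/input/output record (the "iio" in cidds-agg-alpaca-iio suggests exactly that layout) and render it with the standard Alpaca prompt template. As a minimal sketch of that rendering; the record contents below are invented for illustration and are not taken from the actual dataset:

```python
# Hedged sketch of the standard Alpaca prompt template implied by
# `type: alpaca`. The example record is hypothetical; real rows from
# cidds-agg-alpaca-iio are not shown in this commit.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}"
)

record = {  # hypothetical instruction/input/output ("iio") row
    "instruction": "Classify the aggregated flow record.",
    "input": "src=192.168.1.5 dst=10.0.0.2 bytes=4096",
    "output": "normal",
}

print(ALPACA_TEMPLATE.format(**record))
```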
adapter_config.json CHANGED
@@ -20,13 +20,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "o_proj",
-    "up_proj",
+    "k_proj",
     "q_proj",
+    "up_proj",
+    "o_proj",
     "down_proj",
     "v_proj",
-    "gate_proj",
-    "k_proj"
+    "gate_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fe0cd794280ba4be814605b8e350c940212b9825d39954d62b6e4429064869c7
+oid sha256:4fe6016748a4649511c7cb4545e38323bffc0ff088bb42b33e6373b9a5be77d3
 size 319977674
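
The adapter_model.bin entry is a Git LFS pointer file rather than the weights themselves: its three lines record the pointer spec version, the sha256 of the real payload, and the payload size in bytes (identical before and after, so only the contents changed). As a hedged sketch, one way to check that a downloaded file matches the new pointer; the repo id below is a placeholder, since the commit page does not show the repository name:

```python
# Hedged sketch: download the LFS-backed adapter file from the Hub and
# verify its sha256 against the oid in the new pointer. The repo_id is
# hypothetical; the actual repository name is not shown in this commit.
import hashlib
from huggingface_hub import hf_hub_download

EXPECTED = "4fe6016748a4649511c7cb4545e38323bffc0ff088bb42b33e6373b9a5be77d3"

path = hf_hub_download(
    repo_id="caffeinatedcherrychic/your-adapter-repo",  # hypothetical
    filename="adapter_model.bin",
)

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "downloaded weights do not match the pointer"
```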