Undi95 committed
Commit e092078 · 1 Parent(s): 1c640a0

Update README.md

Files changed (1): README.md +14 -8
README.md CHANGED
@@ -2,31 +2,35 @@
 tags:
 - generated_from_trainer
 model-index:
-- name: lora-out
+- name: no_robots-alpaca
   results: []
+license: cc-by-nc-4.0
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->

 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
-# lora-out
+# no_robots-alpaca

-This model was trained from scratch on the None dataset.
+This LoRA was trained from scratch on the [Doctor-Shotgun/no-robots-sharegpt](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt) dataset, with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base model.
 It achieves the following results on the evaluation set:
 - Loss: 1.6087

 ## Model description

-More information needed
+The LoRA was trained on [Doctor-Shotgun/no-robots-sharegpt](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt), a ShareGPT conversion of the original [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots) dataset, but with Alpaca prompting.

-## Intended uses & limitations
+## Prompt template: Alpaca

-More information needed
+```
+Below is an instruction that describes a task. Write a response that appropriately completes the request.

-## Training and evaluation data
+### Instruction:
+{prompt}
+
+### Response:
+```

-More information needed

 ## Training procedure

@@ -74,3 +78,5 @@ The following hyperparameters were used during training:
 - Pytorch 2.0.1+cu117
 - Datasets 2.14.6
 - Tokenizers 0.14.1
+
+If you want to support me, you can do so [here](https://ko-fi.com/undiai).
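
For completeness, here is a minimal usage sketch (not part of the original card) showing how the adapter could be loaded on top of the base model and queried with the Alpaca template above. The adapter repo id `Undi95/no_robots-alpaca` is an assumption inferred from the commit author and model name; substitute the real repo id, and note that loading a 13B model in fp16 needs roughly 26 GB of memory.

```python
# Minimal sketch: load the base model, attach the LoRA with PEFT,
# and generate from an Alpaca-formatted prompt.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "TheBloke/Llama-2-13B-fp16"  # base model named in the card
ADAPTER_ID = "Undi95/no_robots-alpaca"    # ASSUMPTION: hypothetical repo id for this LoRA

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER_ID)  # attach the LoRA weights

def alpaca_prompt(instruction: str) -> str:
    """Fill the Alpaca template from the card with a user instruction."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

inputs = tokenizer(
    alpaca_prompt("Write a short haiku about robots."), return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```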
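A note on the design: keeping the weights as a separate LoRA adapter keeps the download small and lets the same base model serve several fine-tunes; if a standalone checkpoint is preferred, PEFT's `merge_and_unload()` folds the adapter into the base weights after loading. This is standard PEFT behavior, not something the card itself specifies.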