nRuaif committed · Commit 9ac21b7 · Parent(s): 051f046
Create README.md

README.md ADDED
---
license: creativeml-openrail-m
language:
- en
pipeline_tag: text-generation
---

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** nRuaif
- **Model type:** large language model
- **License:** creativeml-openrail-m
- **Finetuned from model [optional]:** Llama-70B

### Model Sources [optional]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

The model uses the FastChat/ShareGPT conversation format.
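As a rough illustration, the sketch below assembles a Vicuna-style prompt of the kind FastChat uses for ShareGPT-style conversations; the exact system message and `USER:`/`ASSISTANT:` separators are assumptions, not confirmed details of this model's fine-tuning data.

```python
# Minimal sketch of a Vicuna-style FastChat prompt.
# The system message and separators are assumptions; adjust to taste.
def build_prompt(history, user_message):
    system = (
        "A chat between a curious user and an artificial intelligence assistant. "
        "The assistant gives helpful, detailed, and polite answers to the user's questions."
    )
    parts = [system, ""]
    for user_turn, assistant_turn in history:
        parts.append(f"USER: {user_turn}")
        parts.append(f"ASSISTANT: {assistant_turn}</s>")
    parts.append(f"USER: {user_message}")
    parts.append("ASSISTANT:")
    return "\n".join(parts)


print(build_prompt([("Hi!", "Hello! How can I help you today?")], "Write a short tavern scene."))
```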
### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model is fine-tuned for general and erotic roleplay, while still being usable as an assistant (though it might not be a very helpful one).
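Below is a minimal inference sketch with 🤗 Transformers; the repository id is a placeholder, and loading in BF16 with `device_map="auto"` is just one reasonable way to run a 70B fine-tune.

```python
# Hypothetical usage sketch; "nRuaif/<this-model>" is a placeholder repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nRuaif/<this-model>"  # replace with the actual repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BF16 training regime below
    device_map="auto",           # shard the 70B weights across available GPUs
)

prompt = "USER: Write a short tavern scene.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```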
### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

Do anything you want. I don't care.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The model may be biased toward NSFW output due to the large proportion of NSFW data in the training set.

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

About 3,000 conversations with a 4090-token cutoff length.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- **Training regime:** BF16, QLoRA, constant LR 5e-5 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
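For reference, a QLoRA setup along these lines (4-bit base weights, BF16 compute, constant LR of 5e-5) might look like the sketch below; the base checkpoint, LoRA rank/alpha, and target modules are assumptions rather than the author's exact configuration, and the dataset/Trainer wiring is omitted.

```python
# Minimal QLoRA configuration sketch with transformers + peft + bitsandbytes.
# Only BF16, 4-bit QLoRA, and the constant 5e-5 learning rate come from the
# card above; everything else here is an assumed placeholder.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # BF16 compute
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",  # assumed base checkpoint for "Llama-70B"
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,  # assumed values
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="out",
    bf16=True,                       # BF16 training
    learning_rate=5e-5,
    lr_scheduler_type="constant",    # constant LR from the card
    per_device_train_batch_size=1,   # assumed
    gradient_accumulation_steps=16,  # assumed
    num_train_epochs=1,              # assumed
)
```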
### Compute Infrastructure

The model was trained on a single A100 for 10 hours on RunPod.