Triangle104 committed
Commit 66b33a7
1 Parent(s): baaeaaf

Update README.md

Files changed (1): README.md (+76, -0)
README.md CHANGED
@@ -54,6 +54,82 @@ tags:
  This model was converted to GGUF format from [`PocketDoc/Dans-PersonalityEngine-v1.0.0-8b`](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-v1.0.0-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-v1.0.0-8b) for more details on the model.
+ ---
+ ## Model details
+
+ ### What is it?
+
+ This model is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and it is equally at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, roleplaying scenarios, text adventure games, co-writing, and much more. The full dataset is publicly available and can be found in the datasets section of the model page.
+
+ No harmfulness alignment has been done on this model, so please take appropriate precautions when using it in a production environment.
+
+ ### Prompting
+
+ The model has been trained on the standard "ChatML" prompt format, an example of which is shown below:
+
+ <|im_start|>system
+ system prompt<|im_end|>
+ <|im_start|>user
+ Hi there!<|im_end|>
+ <|im_start|>assistant
+ Nice to meet you!<|im_end|>
+ <|im_start|>user
+ Can I ask a question?<|im_end|>
+ <|im_start|>assistant
+
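For reference, the transcript above is straightforward to assemble programmatically. Below is a minimal Python sketch; the `build_chatml_prompt` helper and the example messages are illustrative, not part of the model's tooling:

```python
# Minimal ChatML prompt builder (illustrative helper, not shipped with this model).
def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts into a ChatML prompt string.

    The trailing "<|im_start|>assistant\n" opens the turn the model will
    complete; generation should stop at the "<|im_end|>" token.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "system", "content": "system prompt"},
    {"role": "user", "content": "Hi there!"},
])
print(prompt)
```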
+ ### SillyTavern templates
+
+ Below are Instruct and Context templates for use within SillyTavern.
+
+ Context template:
+
+ {
+   "story_string": "<|im_start|>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<|im_end|>\n",
+   "example_separator": "",
+   "chat_start": "",
+   "use_stop_strings": false,
+   "allow_jailbreak": false,
+   "always_force_name2": false,
+   "trim_sentences": false,
+   "include_newline": false,
+   "single_line": false,
+   "name": "Dan-ChatML"
+ }
+
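The `story_string` above relies on SillyTavern's Handlebars-style macros: each `{{#if field}}...{{/if}}` block is emitted only when that field is set, so empty character-card fields don't leave stray lines in the system turn. A rough sketch of that expansion in Python (the regex-based renderer and the field values are illustrative; SillyTavern's real macro engine handles many more cases):

```python
import re

# Two fields from the story_string above, abbreviated for the example.
STORY_STRING = (
    "<|im_start|>system\n"
    "{{#if system}}{{system}}\n{{/if}}"
    "{{#if description}}{{description}}\n{{/if}}"
    "<|im_end|>\n"
)

def render(template, values):
    """Tiny Handlebars-style renderer: expand {{#if x}}...{{/if}} blocks,
    then substitute any remaining {{x}} placeholders."""
    template = re.sub(
        r"\{\{#if (\w+)\}\}(.*?)\{\{/if\}\}",
        lambda m: m.group(2) if values.get(m.group(1)) else "",
        template,
        flags=re.S,
    )
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values.get(m.group(1), ""), template)

# "description" is unset here, so its whole block disappears.
print(render(STORY_STRING, {"system": "system prompt"}))
```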
+ Instruct template:
+
+ {
+   "system_prompt": "Write {{char}}'s actions and dialogue, user will write {{user}}'s.",
+   "input_sequence": "<|im_start|>user\n",
+   "output_sequence": "<|im_start|>assistant\n",
+   "first_output_sequence": "",
+   "last_output_sequence": "",
+   "system_sequence_prefix": "",
+   "system_sequence_suffix": "",
+   "stop_sequence": "<|im_end|>",
+   "wrap": false,
+   "macro": true,
+   "names": false,
+   "names_force_groups": false,
+   "activation_regex": "",
+   "skip_examples": false,
+   "output_suffix": "<|im_end|>\n",
+   "input_suffix": "<|im_end|>\n",
+   "system_sequence": "<|im_start|>system\n",
+   "system_suffix": "<|im_end|>\n",
+   "user_alignment_message": "",
+   "last_system_sequence": "",
+   "system_same_as_user": false,
+   "first_input_sequence": "",
+   "last_input_sequence": "",
+   "name": "Dan-ChatML"
+ }
+
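To see how these fields map onto the ChatML turns shown earlier, here is a small Python sketch that loads the instruct template and wraps a single user message with its sequences (the `dan_chatml_instruct.json` filename is illustrative; save the JSON above under any name):

```python
import json

# Load the instruct template from above ("dan_chatml_instruct.json" is an
# illustrative filename for a local copy of that JSON).
with open("dan_chatml_instruct.json") as f:
    template = json.load(f)

def wrap_user_turn(text):
    """Wrap one user message the way the template specifies, then open the
    assistant turn. Generation should stop at template["stop_sequence"]."""
    return (
        template["input_sequence"]     # "<|im_start|>user\n"
        + text
        + template["input_suffix"]     # "<|im_end|>\n"
        + template["output_sequence"]  # "<|im_start|>assistant\n"
    )

print(wrap_user_turn("Can I ask a question?"))
```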
+ ### Training
+
+ This model was full-finetuned for 4 epochs on 8x H100s, which equated to 21 hours of training.
+
+ ---
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
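If you would rather call the quantized model from Python than from the llama.cpp CLI, the community `llama-cpp-python` bindings wrap the same runtime and apply the model's ChatML template for you. A minimal sketch (the GGUF filename is illustrative; substitute whichever quant you downloaded from this repo):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# The filename is illustrative; point this at the quant you downloaded.
llm = Llama(
    model_path="dans-personalityengine-v1.0.0-8b-q4_k_m.gguf",
    n_ctx=4096,  # context window to allocate
)

# create_chat_completion applies the model's built-in chat template (ChatML).
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "Hi there!"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```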