Naberius-7B is a Mistral-class spherical linear interpolated merge of three high...

These models were hand picked after careful review of claims, datasets, and user postings.
The core elements that dictated which models to accept hinged on these values: logic, imagination, and an aversion to censorship, e.g., railroading/gaslighting users instead of accommodating them.

## Our implementation of Spherical Linear Interpolation used for this project:

Visit our Project Git here: https://github.com/Digitous/LLM-SLERP-Merge

Spherical Linear Interpolation (SLERP) merging produces more coherently smooth merges than a standard weight merge, also known as LERP (linear interpolation).
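
The actual implementation lives in the project repo linked above; as a rough illustration of the idea only (a minimal NumPy sketch, not the project's code), SLERP interpolates along the arc between two flattened weight tensors, so a midway merge keeps the weights' magnitude, while LERP cuts across the chord and shrinks it:

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight tensors."""
    a, b = v0.ravel(), v1.ravel()
    cos_theta = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    theta = np.arccos(cos_theta)      # angle between the flattened weight vectors
    if np.sin(theta) < eps:           # nearly parallel weights: SLERP degenerates to LERP
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the great-circle arc instead of the straight chord.
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

def lerp(t, v0, v1):
    """Standard linear weight merge, shown for comparison."""
    return (1.0 - t) * v0 + t * v1

# For two orthogonal unit vectors, SLERP's midpoint keeps norm 1.0,
# while LERP's midpoint shrinks to norm sqrt(0.5) ~= 0.707.
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid_slerp, mid_lerp = slerp(0.5, u, v), lerp(0.5, u, v)
```

That norm shrinkage under LERP is one intuition for why SLERP-merged weights tend to stay closer to the scale the source models were trained at.
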
## What Makes Naberius Special?

Combining zephyr-7b-sft-beta and OpenHermes-2-Mistral-7B, then adding dolphin-2.2.1-mistral-7b to the result with a minimally destructive merge technique, preserves a large amount of the behavior of all three models in a cohesive fashion.
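
The two-stage merge described above can be sketched roughly like this (a toy NumPy illustration with made-up 2x2 "checkpoints" and an illustrative 0.5 blend ratio; the real pipeline is in the project repo linked earlier):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Tensor-level spherical interpolation (see the project repo)."""
    a, b = v0.ravel(), v1.ravel()
    cos_theta = np.clip(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if np.sin(theta) < eps:                    # nearly parallel: fall back to LERP
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

def merge_state_dicts(sd_a, sd_b, t=0.5):
    """SLERP every matching parameter tensor of two same-architecture models."""
    assert sd_a.keys() == sd_b.keys()
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}

# Toy stand-ins for real checkpoints (parameter name -> weight tensor).
zephyr     = {"w": np.array([[1.0, 0.0], [0.0, 1.0]])}
openhermes = {"w": np.array([[0.0, 1.0], [1.0, 0.0]])}
dolphin    = {"w": np.array([[1.0, 1.0], [1.0, 1.0]])}

base  = merge_state_dicts(zephyr, openhermes)  # stage 1: zephyr + OpenHermes
final = merge_state_dicts(base, dolphin)       # stage 2: fold in dolphin
```

Merging two models first and then folding in the third keeps each step a simple two-model interpolation, which is what makes the technique minimally destructive at every stage.
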

Naberius can: do coherent roleplay far beyond any previous 7B-parameter model, and follow instructions exceptionally well, especially for a 7B; as a bonus of being lightweight, it delivers incredible inference speed. Naberius has shown some signs of spatial awareness and does adapt to nuance in conversation. All around, a pliable, imaginative, and logic-oriented 7B that at times punches upward to what feels like a 30B or more.

Naberius can't: walk your dog, do your homework, clean your dishes, tell you to turn off the computer and go to bed on time.
# Ensemble Credits:
All models merged to create Naberius are LLaMAv2-class Mistral-7B series models.

- zephyr-7b-sft-beta
- OpenHermes-2-Mistral-7B
- dolphin-2.2.1-mistral-7b

Thanks to Mistral AI for the amazing Mistral LM - and also thanks to Meta for LLaMAv2.

Thanks to each and every one of you for your incredible work developing some of the best things to come out of this community.