Update README.md
README.md
CHANGED
@@ -13,7 +13,7 @@ tags:
 - true-sequential: yes
 - act-order: yes
 - 8-bit quantized - Read more about this here: https://github.com/ggerganov/llama.cpp/pull/951
-- Conversion process:
+- Conversion process: LLaMa 13B -> LLaMa 13B HF -> Vicuna13B-v1.1 HF -> Vicuna13B-v1.1-8bit-128g
 
 <br>
 <br>
@@ -93,12 +93,15 @@ pip install -r requirements.txt
 
 # License
 
-Research only.
+Research only - non-commercial research purposes - other restrictions apply. See inherited LICENSE file from LLaMa.
 
 LLaMA-13B converted to work with Transformers/HuggingFace is under a special license, please see the LICENSE file for details.
 
 https://www.reddit.com/r/LocalLLaMA/comments/12kl68j/comment/jg31ufe/
 
+<br>
+<br>
+
 # Vicuna Model Card
 
 ## Model details
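The conversion pipeline named in the diff (LLaMa 13B -> LLaMa 13B HF -> Vicuna13B-v1.1 HF -> Vicuna13B-v1.1-8bit-128g) can be sketched as shell commands. This is only a sketch under assumptions: the input/output paths and the GPTQ-for-LLaMa checkout are hypothetical, not taken from the commit; the commit states only the stage names and the quantization tags (8-bit, 128g, true-sequential, act-order).

```shell
# Sketch of the conversion pipeline described in the model card.
# All paths and directory names below are assumptions for illustration.

# 1. LLaMa 13B -> LLaMa 13B HF: convert Meta's original checkpoint to the
#    Transformers/HuggingFace layout (conversion script ships with transformers).
python convert_llama_weights_to_hf.py \
    --input_dir ./llama --model_size 13B --output_dir ./llama-13b-hf

# 2. LLaMa 13B HF -> Vicuna13B-v1.1 HF: apply the Vicuna v1.1 delta
#    weights on top of the base model with FastChat.
python -m fastchat.model.apply_delta \
    --base ./llama-13b-hf \
    --delta lmsys/vicuna-13b-delta-v1.1 \
    --target ./vicuna-13b-v1.1-hf

# 3. Vicuna13B-v1.1 HF -> Vicuna13B-v1.1-8bit-128g: GPTQ quantization to
#    8 bits with group size 128, using --true-sequential and --act-order,
#    matching the tags at the top of the model card (GPTQ-for-LLaMa's llama.py).
python llama.py ./vicuna-13b-v1.1-hf c4 \
    --wbits 8 --groupsize 128 --true-sequential --act-order \
    --save vicuna13B-v1.1-8bit-128g.pt
```

Each stage writes a full copy of the 13B weights, so plan for roughly three times the checkpoint size in free disk space.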