Update README.md
README.md CHANGED
@@ -13,10 +13,10 @@ With LLaMA2-Accessory, mixtral-8x7b enjoys the following features:
 4. Distributed and/or quantized inference
 
 ## 🔥 Online Demo
-We host a web demo at
+We host a web demo at [here](http://106.14.127.192/), which shows a mixtral-8x7b model finetuned on
 [evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) and
 [ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k), with LoRA and Bias tuning.
 Please note that this is a temporary link, and we will update our official permanent link today.
 
 ## 💡 Tutorial
-A detailed tutorial is available at
+A detailed tutorial is available at our [document](https://llama2-accessory.readthedocs.io/en/latest/projects/mixtral-8x7b.html)