frankaging committed · f502fcb · 1 parent: eddcee7 · update terms
app.py
CHANGED
@@ -22,9 +22,11 @@ MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))
 DESCRIPTION = """\
 # ReFT-Chat (Llama-2 7B with 1K examples)
 
-###
+### What's ReFT-Chat?
+ReFT-Chat is a chatbot built with ReFT and Llama-2 7B. It is trained with 1K training examples from the unpaired [Ultrafeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedback). It is not good at multi-turn conversations. You can train your own ReFT agent and share it on HuggingFace by following this [tutorial](https://github.com/stanfordnlp/pyreft/tree/main/examples/gradio/train_and_share.ipynb)!
 
-
+### Usage Terms
+This should only be used for research purposes. We did not conduct additional safety training with ReFT. We evaluate this model using [Alpaca-eval](https://github.com/tatsu-lab/alpaca_eval). Performance results can be found in [our ReFT paper](https://arxiv.org/abs/2404.03592). Our model inherits all the underlying risks associated with Llama. See terms outlined below.
 """
 
 LICENSE = """
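The hunk header carries the surrounding context from app.py: the app caps prompt length with an environment-configurable constant. A minimal sketch of that pattern, assuming only the standard library (the `truncate_ids` helper is hypothetical, not part of app.py):

```python
import os

# Read the cap from the environment, falling back to 4096
# (the same default shown in app.py's hunk context).
MAX_INPUT_TOKEN_LENGTH = int(os.getenv("MAX_INPUT_TOKEN_LENGTH", "4096"))

def truncate_ids(input_ids):
    """Hypothetical helper: keep only the most recent tokens up to the cap."""
    if len(input_ids) > MAX_INPUT_TOKEN_LENGTH:
        # Drop the oldest tokens so the prompt tail (the latest turn) survives.
        return input_ids[-MAX_INPUT_TOKEN_LENGTH:]
    return input_ids
```

Trimming from the front keeps the most recent conversation turns, which is the usual choice for chat-style prompts.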