Update README.md

README.md CHANGED

````diff
@@ -116,8 +116,8 @@ You will first need to install `transformers` and `accelerate` (just to ease the
 ```python
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
-model = AutoModelForCausalLM.from_pretrained("DRXD1000/Phoenix", torch_dtype=torch.bfloat16, device_map="auto")
-tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix")
+model = AutoModelForCausalLM.from_pretrained("DRXD1000/Phoenix-GPTQ", torch_dtype=torch.bfloat16, device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("DRXD1000/Phoenix-GPTQ")
 prompt = """<|system|>
 </s>
 <|user|>
@@ -131,9 +131,9 @@ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
 
 ## Ethical Considerations and Limitations
 
-As with all LLMs, the potential outputs of `DRXD1000/Phoenix` cannot be predicted
+As with all LLMs, the potential outputs of `DRXD1000/Phoenix-GPTQ` cannot be predicted
 in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses
-to user prompts. Therefore, before deploying any applications of `DRXD1000/Phoenix`, developers should
+to user prompts. Therefore, before deploying any applications of `DRXD1000/Phoenix-GPTQ`, developers should
 perform safety testing and tuning tailored to their specific applications of the model.
 Please see Meta's [Responsible Use Guide](https://ai.meta.com/llama/responsible-use-guide/).
````
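The README's code block builds the prompt with `<|system|>`, `</s>`, and `<|user|>` tokens. The snippet below is a minimal sketch of assembling such a prompt string without loading the model, so it runs without downloading weights. The `<|assistant|>` tag, the trailing `</s>` after the user turn, and the `build_prompt` helper name are assumptions based on the Zephyr-style template this format resembles; only the `<|system|>` / `</s>` / `<|user|>` portion is shown verbatim in the diff above.

```python
def build_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble a chat prompt in the template shown in the README.

    The <|assistant|> tag and </s> placement after the user turn are
    assumed (Zephyr-style); the diff only shows <|system|>, </s>, <|user|>.
    """
    return (
        f"<|system|>\n{system_message}</s>\n"
        f"<|user|>\n{user_message}</s>\n"
        f"<|assistant|>\n"
    )

# With an empty system message this reproduces the README's opening lines:
# "<|system|>\n</s>\n<|user|>\n..."
prompt = build_prompt("Was ist ein LLM?")
print(prompt)
```

The resulting string would be passed to `tokenizer(...)` and `model.generate(...)` as in the README's example.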