---
|
library_name: peft |
|
base_model: mistralai/Mistral-7B-v0.1 |
|
language: |
|
- en |
|
tags: |
|
- Δ |
|
- LoRA |
|
--- |
|
|
|
<!-- |
|
# Model Card for Model ID |
|
--> |
|
|
|
## Model Details |
|
|
|
<!--![image/png](https://cdn-uploads.huggingface.co/production/uploads/648b0f4fd8fe693f51de98d2/aerBANxBtCya732NdBiw0.png)--> |
|
$$ |
|
W_{mistral} + LoRA_{hermes} = W_{hermes} \\ |
|
W_{hermes} - LoRA_{hermes} = W_{mistral} |
|
$$ |
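In plain terms: the Hermes weights are the Mistral base weights plus a low-rank delta, so subtracting that delta recovers the base. A toy sketch of the arithmetic with made-up matrices (the `rank`, `alpha`, and shapes below are illustrative, not the real 7B dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

d, rank, alpha = 8, 2, 4      # toy sizes; a real 7B layer is far larger
scale = alpha / rank          # standard LoRA scaling factor

W_mistral = rng.normal(size=(d, d))   # "base" weight
A = rng.normal(size=(rank, d))        # LoRA down-projection
B = rng.normal(size=(d, rank))        # LoRA up-projection

delta = scale * (B @ A)               # the low-rank update the adapter stores

W_hermes = W_mistral + delta          # W_mistral + LoRA_hermes = W_hermes
W_recovered = W_hermes - delta        # W_hermes - LoRA_hermes = W_mistral

assert np.allclose(W_recovered, W_mistral)
```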
|
|
|
|
|
### Why Though? |
|
Unfortunately, this is not as simple as [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora),

due to the way in which [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) was trained:

because tokens were added to the vocabulary, the correspondence is not 1-to-1 with [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1),

as it is with [typeof/zephyr-7b-beta-lora](https://huggingface.co/typeof/zephyr-7b-beta-lora)...

Nevertheless, if you have found yourself here, I'm sure you can figure out how to use it... if not, open an issue!
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ox7zGoygsJQFFV3rLT4v9.png) |
|
Photo courtesy of @teknium, author of [OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
|
|
|
<!-- |
|
$$ W_{mistral} + LoRA_{zephyr} = W_{zephyr} $$ |
|
``` |
|
typeof/zephyr-7b-beta-lora + mistralai/Mistral-7B-v0.1 |
|
= HuggingFaceH4/zephyr-7b-beta |
|
```
|
|
|
### Model Description |
|
|
|
- **Developed by:** [More Information Needed] |
|
- **Funded by [optional]:** [More Information Needed] |
|
- **Shared by [optional]:** [More Information Needed] |
|
- **Model type:** [More Information Needed] |
|
- **Language(s) (NLP):** [More Information Needed] |
|
- **License:** [More Information Needed] |
|
- **Finetuned from model [optional]:** [More Information Needed] |
|
|
|
|
|
### Model Sources [optional] |
|
|
|
- **Repository:** [More Information Needed] |
|
- **Paper [optional]:** [More Information Needed] |
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
## Uses |
|
|
|
### Direct Use |
|
|
|
[More Information Needed] |
|
|
|
### Downstream Use [optional] |
|
|
|
[More Information Needed] |
|
|
|
### Out-of-Scope Use |
|
|
|
[More Information Needed] |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
[More Information Needed] |
|
|
|
### Recommendations |
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. |
|
--> |
|
|
|
## How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
[More Information Needed] |
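Until this section is filled in, here is a rough sketch of how loading might look, assuming the usual `transformers` + `peft` flow. The adapter's repo id is left as a placeholder, and the choice of tokenizer is an assumption based on how OpenHermes-2.5 was trained, so treat every name here as illustrative. The key difference from the zephyr LoRA is the resize step: because tokens were added, the base model's embeddings must be resized to the Hermes vocabulary before the adapter lines up:

```python
# pip install transformers peft

from transformers import AutoModelForCausalLM, AutoTokenizer

base_model_id = "mistralai/Mistral-7B-v0.1"
tokenizer_id = "teknium/OpenHermes-2.5-Mistral-7B"  # added tokens + chat template
peft_model_id = "..."  # placeholder: this adapter's repo id

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id)

# OpenHermes-2.5 added tokens, so resize the base embeddings to the
# Hermes vocabulary before loading the adapter weights.
model.resize_token_embeddings(len(tokenizer))

model.load_adapter(peft_model_id)
```

This is a sketch, not a verified recipe; in particular, how the new token embeddings were initialized during training may matter, so check the results against the original model's outputs.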
|
|
|
<!-- |
|
|
|
```python |
|
# pip install transformers peft |
|
|
|
import torch |
|
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer |
|
|
|
model_id = "mistralai/Mistral-7B-v0.1" |
|
peft_model_id = "typeof/zephyr-7b-beta-lora" |
|
|
|
model = AutoModelForCausalLM.from_pretrained(model_id) |
|
model.load_adapter(peft_model_id) |
|
|
|
tokenizer_id = "HuggingFaceH4/zephyr-7b-beta" # for chat template etc... |
|
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id) |
|
|
|
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer) |
|
|
|
messages = [ |
|
{ |
|
"role": "system", |
|
"content": "You are a friendly chatbot who always responds in the style of a pirate", |
|
}, |
|
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"}, |
|
] |
|
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True) |
|
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95) |
|
print(outputs[0]["generated_text"]) |
|
``` |
|
``` |
|
<|system|> |
|
You are a friendly chatbot who always responds in the style of a pirate</s> |
|
<|user|> |
|
How many helicopters can a human eat in one sitting?</s> |
|
<|assistant|> |
|
Well, me matey, that’s a good question indeed! I’ve never seen |
|
a human eat a helicopter, and I don’t think many others have |
|
either. However, I’ve heard rumors that some people have |
|
eaten entire airplanes, so I suppose it’s not entirely unheard |
|
of. |
|
|
|
As for the number of helicopters one could eat, that depends |
|
on the size and weight of the helicopter. A small, lightweight |
|
helicopter would be easier to eat than a large, heavy one. |
|
In fact, I’ve heard that some people have eaten entire helicopters |
|
as part of a dare or a challenge. |
|
|
|
So, my advice to you, me hearty, is to steer clear of helicopters |
|
and stick to more traditional fare. Yarr!</s> |
|
``` |
|
--> |
|
#### Summary |
|
|
|
A fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) |
|
|
|
[LoRA](https://arxiv.org/abs/2106.09685)



[QLoRA](https://arxiv.org/abs/2305.14314)