---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
language:
- en
pipeline_tag: text-generation
tags:
- Δ
- LoRA
---
## Model Details
<!--![image/png](https://cdn-uploads.huggingface.co/production/uploads/648b0f4fd8fe693f51de98d2/aerBANxBtCya732NdBiw0.png)-->
$$
W_{mistral} + LoRA_{zephyr} = W_{zephyr} \\
W_{zephyr} - LoRA_{zephyr} = W_{mistral}
$$
In terms of Hub repositories:

```
typeof/zephyr-7b-beta-lora + mistralai/Mistral-7B-v0.1
= HuggingFaceH4/zephyr-7b-beta
```
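Because the adapter is just the weight delta, you can materialize $W_{zephyr}$ explicitly by merging the adapter into the base model. A minimal sketch using peft's `merge_and_unload`; the output directory name is illustrative:

```python
# pip install transformers peft
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# W_mistral: load the base weights.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.bfloat16
)

# + LoRA_zephyr: attach the adapter.
model = PeftModel.from_pretrained(base, "typeof/zephyr-7b-beta-lora")

# = W_zephyr: fold the delta into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("zephyr-7b-beta-merged")  # illustrative output path
```

Merging ahead of time removes the adapter indirection at inference; keeping the adapter separate means $W_{mistral}$ is recovered by simply not loading it.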
## How to Get Started with the Model
Use the code below to get started with the model.
```python
# pip install transformers peft
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

# Load the Mistral base model and attach the Zephyr LoRA adapter on top.
model_id = "mistralai/Mistral-7B-v0.1"
peft_model_id = "typeof/zephyr-7b-beta-lora"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)  # bf16 halves memory vs. fp32
model.load_adapter(peft_model_id)

# Use the Zephyr tokenizer for its chat template etc...
tokenizer_id = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Sample output:
```
<|system|>
You are a friendly chatbot who always responds in the style of a pirate</s>
<|user|>
How many helicopters can a human eat in one sitting?</s>
<|assistant|>
Well, me matey, that’s a good question indeed! I’ve never seen
a human eat a helicopter, and I don’t think many others have
either. However, I’ve heard rumors that some people have
eaten entire airplanes, so I suppose it’s not entirely unheard
of.
As for the number of helicopters one could eat, that depends
on the size and weight of the helicopter. A small, lightweight
helicopter would be easier to eat than a large, heavy one.
In fact, I’ve heard that some people have eaten entire helicopters
as part of a dare or a challenge.
So, my advice to you, me hearty, is to steer clear of helicopters
and stick to more traditional fare. Yarr!</s>
```
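Alternatively, peft can resolve the base model from the adapter's config in a single call; a minimal equivalent sketch:

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Reads the adapter config, loads mistralai/Mistral-7B-v0.1 underneath,
# and attaches the LoRA weights on top.
model = AutoPeftModelForCausalLM.from_pretrained("typeof/zephyr-7b-beta-lora")
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
```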
#### Summary
A [LoRA](https://arxiv.org/abs/2106.09685) adapter for [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that reconstructs the fine-tuned model [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta). See also [QLoRA](https://arxiv.org/abs/2305.14314).
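If GPU memory is tight, the base model can be loaded in 4-bit (as popularized by QLoRA) before attaching the adapter. A sketch assuming `bitsandbytes` is installed; the quantization settings below are illustrative defaults, not values taken from this repository:

```python
# pip install transformers peft bitsandbytes accelerate
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Illustrative 4-bit (NF4) quantization config, not from this repo.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "typeof/zephyr-7b-beta-lora")
```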