---
license: apache-2.0
language:
- en
- he
library_name: transformers
---
# Hebrew-Mistral-7B-200K
> **Please note: There have been some issues reported about this model; updates are coming soon.**
Hebrew-Mistral-7B-200K is an open-source Large Language Model (LLM) with 7 billion parameters and a 200K-token context length, pretrained in Hebrew and English and based on Mistral-7B-v0.1 from Mistral AI.
It has an extended Hebrew tokenizer with 64,000 tokens and is continuously pretrained from Mistral-7B on tokens in both English and Hebrew.
The resulting model is a powerful general-purpose language model suitable for a wide range of natural language processing tasks, with a focus on Hebrew language understanding and generation.
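To verify the extended tokenizer and see how it segments Hebrew text, a quick check like the following can help (a minimal sketch; the 64,000-token vocabulary figure comes from the description above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")

# Should report roughly 64,000 tokens, per the description above.
print(len(tokenizer))

# Hebrew text should split into comparatively few tokens thanks to the extended vocabulary.
print(tokenizer.tokenize("שלום! מה שלומך היום?"))
```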
### Usage
Below are some code snippets to get you started with running the model.
First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case. The GPU and 4-bit snippets below also need the `accelerate` package (for `device_map="auto"`) and, for 4-bit loading, `bitsandbytes`.
### Running on CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (the weights download on first use).
tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
### Running on GPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")
# device_map="auto" places the weights on available GPUs (requires accelerate).
model = AutoModelForCausalLM.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K", device_map="auto")

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
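By default `from_pretrained` loads weights in full 32-bit precision, which takes roughly 28 GB of GPU memory for a 7B-parameter model. If your hardware supports it, passing a half-precision dtype roughly halves that; a sketch (the choice of `bfloat16` is an assumption, pick what your GPU supports):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")
# Load in bfloat16 to roughly halve memory use compared to float32.
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Mistral-7B-200K",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
```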
### Running with 4-Bit precision
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("yam-peleg/Hebrew-Mistral-7B-200K")
# Quantize the weights to 4-bit on load (requires bitsandbytes).
model = AutoModelForCausalLM.from_pretrained(
    "yam-peleg/Hebrew-Mistral-7B-200K",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
)

input_text = "שלום! מה שלומך היום?"  # "Hello! How are you today?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")

outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
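All of the snippets above call `generate` with its defaults, which produce short, greedy continuations. For longer or more varied output you can pass standard generation arguments; a sketch (the values are illustrative, not tuned for this model):

```python
outputs = model.generate(
    **input_ids,
    max_new_tokens=128,  # allow a longer continuation than the default
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative values, not tuned for this model
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```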
### Notice
Hebrew-Mistral-7B-200K is a pretrained base model and therefore does not have any moderation mechanisms.
### Authors
- Trained by Yam Peleg.