---
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
language:
- en
- he
library_name: transformers
---
# Hebrew-Gemma-11B-Instruct

- **Base Model:** [Hebrew-Gemma-11B](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B)
- **Instruct Model:** [Hebrew-Gemma-11B-Instruct](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B-Instruct)

The Hebrew-Gemma-11B-Instruct Large Language Model (LLM) is an instruct fine-tuned version of the [Hebrew-Gemma-11B](https://huggingface.co/yam-peleg/Hebrew-Gemma-11B) generative text model, trained on a variety of conversation datasets.

It is a continued pretraining of gemma-7b, extended to a larger scale and trained on 3B additional tokens of both English and Hebrew text data.


# Instruction format

This format must be strictly respected; otherwise the model will generate sub-optimal outputs.

```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
Here is a simple hello world program<end_of_turn><eos>
```

- The conversation starts with `<bos>`.
- Each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity (`user` or `model`).
- Turns finish with the `<end_of_turn>` token.
- The conversation finishes with the `<eos>` token.
 
You can follow this format to build the prompt manually if you need to work without the tokenizer's chat template, as in the sketch below.
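
A minimal sketch of building the prompt by hand, following the format above; `build_prompt` is an illustrative helper, not part of the transformers API:

```python
# Sketch: assemble a Gemma-style prompt string from a list of messages.
# `build_prompt` is a hypothetical helper shown for illustration only.
def build_prompt(messages):
    prompt = "<bos>"
    for message in messages:
        # Each turn: role on its own line, then the content, then <end_of_turn>.
        prompt += f"<start_of_turn>{message['role']}\n{message['content']}<end_of_turn>\n"
    # Leave an open model turn so generation continues as the model.
    prompt += "<start_of_turn>model\n"
    return prompt

print(build_prompt([{"role": "user", "content": "Write a hello world program"}]))
```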

A simple example using the tokenizer's chat template:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Hebrew-Gemma-11B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

# A Hebrew user prompt: "Write simple Python code that prints today's date to the screen."
chat = [
    { "role": "user", "content": "כתוב קוד פשוט בפייתון שמדפיס למסך את התאריך של היום" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
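
To run generation on the templated prompt, here is a minimal continuation of the example above; the sampling settings are illustrative, not recommended values from the author:

```python
# The template already includes <bos>, so skip adding special tokens again.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

# Illustrative settings; adjust max_new_tokens and sampling for your use case.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```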

### Terms of Use

As an extension of Gemma-7B, this model is subject to the original license and terms of use by Google.

### Benchmark Results

- Coming Soon!


### Notice

Hebrew-Gemma-11B is a pretrained base model and therefore does not have any moderation mechanisms.


### Author

Trained by Yam Peleg.