mihaimasala committed
Commit c9654c1 · 1 Parent(s): 0112c3c

Update README.md

Files changed (1): README.md (+10 -10)
README.md CHANGED
@@ -54,8 +54,8 @@ Use the code below to get started with the model.
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base-v1")
-model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base-v1")
+tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")
+model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")
 
 input_text = "Mihai Eminescu a fost "
 input_ids = tokenizer(input_text, return_tensors="pt")
@@ -69,18 +69,18 @@ print(tokenizer.decode(outputs[0]))
 | Model | Average | ARC | MMLU |Winogrande|HellaSwag | GSM8k |TruthfulQA|
 |--------------------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
 | Llama-2-7b | 35.65 | 33.85 | 30.93 | 56.43 | 46.98 | 1.37 | 44.36 |
-| *RoLlama2-7b-Base-v1* | *38.32* | *35.83* | *30.47* | *60.16* | *55.52* | *2.17* | *45.78* |
+| *RoLlama2-7b-Base* | *38.32* | *35.83* | *30.47* | *60.16* | *55.52* | *2.17* | *45.78* |
 | Llama-2-7b-chat | 35.58 | 34.92 | 32.37 | 54.26 | 44.52 | 2.05 | 45.38 |
-|RoLlama2-7b-Instruct-v1| **44.42**|**40.36** |**37.41** |**69.58** | 55.64 | **17.59**| 45.96 |
-|RoLlama2-7b-Chat-v1 | 42.65 | 38.29 | 35.27 | 65.25 | **56.45**| 12.84 | **47.79**|
+|RoLlama2-7b-Instruct| **44.42**|**40.36** |**37.41** |**69.58** | 55.64 | **17.59**| 45.96 |
+|RoLlama2-7b-Chat | 42.65 | 38.29 | 35.27 | 65.25 | **56.45**| 12.84 | **47.79**|
 
 ## MT-Bench
 
 | Model | Average | 1st turn | 2nd turn |
 |--------------------|:--------:|:--------:|:--------:|
 | Llama-2-7b-chat | 1.70 | 2.00 | 1.41 |
-|RoLlama2-7b-Instruct-v1| **4.31**|**5.66** | 2.95 |
-|RoLlama2-7b-Chat-v1 | 3.91 | 4.25 | **3.57** |
+|RoLlama2-7b-Instruct| **4.31**|**5.66** | 2.95 |
+|RoLlama2-7b-Chat | 3.91 | 4.25 | **3.57** |
 
 
 
@@ -88,9 +88,9 @@ print(tokenizer.decode(outputs[0]))
 
 | Model | Link |
 |--------------------|:--------:|
-|*RoLlama2-7b-Base-v1* | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base-v1) |
-|RoLlama2-7b-Instruct-v1| [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct-v1) |
-|RoLlama2-7b-Chat-v1 | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat-v1) |
+|*RoLlama2-7b-Base* | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Base) |
+|RoLlama2-7b-Instruct| [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Instruct) |
+|RoLlama2-7b-Chat | [link](https://huggingface.co/OpenLLM-Ro/RoLlama2-7b-Chat) |
 
 
 <!--
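
For reference, the quick-start snippet as it stands after this commit, assembled from the hunk context above, is sketched below. The diff does not show the lines between the two hunks, so the `model.generate(...)` call and its `max_new_tokens` value are assumptions inferred from the `input_ids` and `outputs` variables visible in the surrounding context.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the renamed model (this commit drops the "-v1" suffix from the repo IDs)
tokenizer = AutoTokenizer.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")
model = AutoModelForCausalLM.from_pretrained("OpenLLM-Ro/RoLlama2-7b-Base")

input_text = "Mihai Eminescu a fost "
input_ids = tokenizer(input_text, return_tensors="pt")

# Assumed generation step: not visible in the diff, reconstructed from the
# variable names used by the README's hunk context.
outputs = model.generate(**input_ids, max_new_tokens=50)

print(tokenizer.decode(outputs[0]))
```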