---
datasets:
- prometheus-eval/Feedback-Collection
- prometheus-eval/Preference-Collection
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- pearsonr
- spearmanr
- kendall-tau
- accuracy
pipeline_tag: text2text-generation
tags:
- text2text-generation
- easyquant
- gguf
---
## Links for Reference

- **Homepage: In Progress**
- **Repository: https://github.com/prometheus-eval/prometheus-eval**
- **Paper: https://arxiv.org/abs/2405.01535**
- **Point of Contact: [email protected]**

# TL;DR
Prometheus 2 is an alternative to GPT-4 evaluation for fine-grained evaluation of an underlying LLM, and it can also serve as a reward model for Reinforcement Learning from Human Feedback (RLHF).
![plot](./finegrained_eval.JPG)

Prometheus 2 is a language model that uses [Mistral-Instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) as its base model.
It is fine-tuned on 100K feedback instances from the [Feedback Collection](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and 200K feedback instances from the [Preference Collection](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).
It is built by weight merging so that a single model supports both absolute grading (direct assessment) and relative grading (pairwise ranking).
Surprisingly, we find that weight merging also improves performance on each individual format.

# Model Details

## Model Description

- **Model type:** Language model
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Related Models:** [All Prometheus Checkpoints](https://huggingface.co/models?search=prometheus-eval/Prometheus)
- **Resources for more information:**
  - [Research paper](https://arxiv.org/abs/2405.01535)
  - [GitHub Repo](https://github.com/prometheus-eval/prometheus-eval)

Prometheus 2 is trained in two different sizes (7B and 8x7B).
You can check out the 8x7B model on [this page](https://huggingface.co/prometheus-eval/prometheus-2-8x7b-v2.0).
Also, check out our datasets on [this page](https://huggingface.co/datasets/prometheus-eval/Feedback-Collection) and [this page](https://huggingface.co/datasets/prometheus-eval/Preference-Collection).

## Prompt Format

We provide wrapper functions and classes for conveniently using Prometheus 2 in [our GitHub repository](https://github.com/prometheus-eval/prometheus-eval).
We highly recommend you use them!

However, if you just want to use the model directly, refer to the prompt format below.
Note that absolute grading and relative grading require different prompt templates and system prompts.

### Absolute Grading (Direct Assessment)
Prometheus requires 4 components in the input: an instruction, a response to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, the response, the reference answer, the criteria description, and a score description for each score from 1 to 5.

Fill in the components marked with `{text}`.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: \"Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)\"
4. Please do not generate any other opening, closing, and explanations.

###The instruction to evaluate:
{orig_instruction}

###Response to evaluate:
{orig_response}

###Reference Answer (Score 5):
{orig_reference_answer}

###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}

###Feedback:
```

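For illustration, here is a minimal sketch of filling the placeholders above with Python's `str.format`. The instruction, response, rubric, and reference answer are made-up example values, and `ABS_TEMPLATE` is assumed to hold the template text shown above (abbreviated here).

```python
# Minimal sketch: fill the absolute grading template with illustrative values.
# ABS_TEMPLATE is assumed to hold the template shown above; it is abbreviated here
# (in practice, prepend the ###Task Description block as well).
ABS_TEMPLATE = """###The instruction to evaluate:
{orig_instruction}

###Response to evaluate:
{orig_response}

###Reference Answer (Score 5):
{orig_reference_answer}

###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}

###Feedback:"""

user_content = ABS_TEMPLATE.format(
    orig_instruction="Explain why the sky is blue to a ten-year-old.",
    orig_response="Because blue light bounces off the air more than other colors do.",
    orig_reference_answer="Sunlight is scattered by air molecules, and blue light scatters the most, so the sky looks blue.",
    orig_criteria="Is the explanation scientifically accurate and age-appropriate?",
    orig_score1_description="Inaccurate and confusing.",
    orig_score2_description="Mostly inaccurate or far too technical.",
    orig_score3_description="Partially accurate and somewhat age-appropriate.",
    orig_score4_description="Accurate but not fully age-appropriate.",
    orig_score5_description="Accurate, clear, and well suited to a ten-year-old.",
)
print(user_content)
```

The filled-in string becomes the user message of the conversation described next.
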
After this, you should apply Mistral's conversation template (not applying it might lead to unexpected behaviors).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```python
from fastchat.conversation import get_conv_template

# dialogs['instruction'] is the filled-in absolute grading prompt built from the template above.
conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# `tokenizer` is the Hugging Face tokenizer loaded for this model.
x = tokenizer(prompt, truncation=False)
```

As a result, the model will generate feedback and a score decision, separated by the phrase `[RESULT]`.

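Putting the pieces together, the sketch below shows one possible end-to-end loop with 🤗 Transformers: build the Mistral-formatted prompt, generate, and split the completion on `[RESULT]`. This is our illustration rather than an official recipe; the model ID and generation settings are assumptions, `user_content` is the filled-in prompt from the sketch above, and for the GGUF files in this repository you would use a GGUF-compatible runtime instead.

```python
# Sketch (model ID and generation settings are illustrative assumptions).
import torch
from fastchat.conversation import get_conv_template
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prometheus-eval/prometheus-7b-v2.0"  # illustrative; point this at the checkpoint you actually use
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# user_content: the filled-in absolute grading prompt from the formatting sketch above.
conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant tasked with providing clear, objective feedback based on specific criteria, ensuring each assessment reflects the absolute standards set for performance.")
conv.append_message(conv.roles[0], user_content)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

feedback, _, result = completion.partition("[RESULT]")
score = int(result.strip()) if result.strip().isdigit() else None
print(feedback.strip(), "->", score)
```
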
### Relative Grading (Pairwise Ranking)
Prometheus requires 4 components in the input: an instruction, 2 responses to evaluate, a score rubric, and a reference answer. You can refer to the prompt format below.
You should fill in the instruction, the 2 responses, the reference answer, and the criteria description.

Fill in the components marked with `{text}`.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of two responses strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, choose a better response between Response A and Response B. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (A or B)"
4. Please do not generate any other opening, closing, and explanations.

###Instruction:
{orig_instruction}

###Response A:
{orig_response_A}

###Response B:
{orig_response_B}

###Reference Answer:
{orig_reference_answer}

###Score Rubric:
{orig_criteria}

###Feedback:
```

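As with the absolute format, the sketch below (illustrative values and variable names, not from the original card) fills in the relative grading placeholders; `REL_TEMPLATE` is assumed to hold the template shown above.

```python
# Sketch with illustrative values: fill the relative grading template shown above.
# REL_TEMPLATE is abbreviated here (prepend the ###Task Description block as well).
REL_TEMPLATE = """###Instruction:
{orig_instruction}

###Response A:
{orig_response_A}

###Response B:
{orig_response_B}

###Reference Answer:
{orig_reference_answer}

###Score Rubric:
{orig_criteria}

###Feedback:"""

fields = {
    "orig_instruction": "Summarize the plot of Romeo and Juliet in two sentences.",
    "orig_response_A": "Two young lovers from feuding families secretly marry; misunderstandings end in both of their deaths.",
    "orig_response_B": "It is a play by Shakespeare about love.",
    "orig_reference_answer": "Romeo and Juliet, children of feuding families, fall in love and marry in secret; a failed plan and tragic miscommunication lead to their deaths, which finally reconciles the families.",
    "orig_criteria": "Does the summary capture the key plot points concisely?",
}
user_content = REL_TEMPLATE.format(**fields)
print(user_content)
```
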
After this, you should apply Mistral's conversation template (not applying it might lead to unexpected behaviors).
You can find the conversation class at this [link](https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py).
```python
from fastchat.conversation import get_conv_template

# dialogs['instruction'] is the filled-in relative grading prompt built from the template above.
conv = get_conv_template("mistral")
conv.set_system_message("You are a fair judge assistant assigned to deliver insightful feedback that compares individual performances, highlighting how each stands relative to others within the same cohort.")
conv.append_message(conv.roles[0], dialogs['instruction'])
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

# `tokenizer` is the Hugging Face tokenizer loaded for this model.
x = tokenizer(prompt, truncation=False)
```

As a result, the model will generate feedback and a verdict (A or B), separated by the phrase `[RESULT]`.

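A small parsing sketch (our illustration, not from the original card): the text before `[RESULT]` is the feedback and the letter after it is the verdict.

```python
# Sketch: split a relative grading completion into feedback and an A/B verdict.
def parse_relative_output(completion: str):
    feedback, _, verdict = completion.partition("[RESULT]")
    verdict = verdict.strip()
    return feedback.strip(), verdict if verdict in ("A", "B") else None

example = "Feedback: Response B follows the rubric more closely ... [RESULT] B"
print(parse_relative_output(example))  # ('Feedback: Response B follows ...', 'B')
```
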
## License
Feedback Collection, Preference Collection, and Prometheus 2 are subject to OpenAI's Terms of Use for the generated data. If you suspect any violations, please reach out to us.

# Citation

If you find the following model helpful, please consider citing our paper!

**BibTeX:**

```bibtex
@misc{kim2023prometheus,
    title={Prometheus: Inducing Fine-grained Evaluation Capability in Language Models},
    author={Seungone Kim and Jamin Shin and Yejin Cho and Joel Jang and Shayne Longpre and Hwaran Lee and Sangdoo Yun and Seongjin Shin and Sungdong Kim and James Thorne and Minjoon Seo},
    year={2023},
    eprint={2310.08491},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```bibtex
@misc{kim2024prometheus,
    title={Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models},
    author={Seungone Kim and Juyoung Suk and Shayne Longpre and Bill Yuchen Lin and Jamin Shin and Sean Welleck and Graham Neubig and Moontae Lee and Kyungjae Lee and Minjoon Seo},
    year={2024},
    eprint={2405.01535},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```