# Compare-Answer Model
Welcome to the repository for the Compare-Answer Model, a model for judging mathematical answers: given a problem, its reference answer (and optional analysis), and a student's solution, it decides whether the student's final answer is correct. It is designed to make automatic answer comparison more accurate and efficient across a wide range of mathematical problems.
## Features
- High Accuracy: Judges whether the final answer in a student's solution matches the reference, ignoring mistakes in the intermediate steps.
- Broad Compatibility: Supports a variety of mathematical problem types and answer formats.
- Easy Integration: Loads directly with Hugging Face Transformers and fits into existing evaluation systems and workflows.
## Installation
To get started with the Compare-Answer Model, clone this repository and load the model with Hugging Face Transformers, as shown in the Quick Start below.
## Quick Start
To use the model, load it with Transformers, build the comparison prompt, and generate the model's judgment:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto
model_path = "path/to/compare-answer-model"  # placeholder: set this to the local path or Hub id of this model

model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
def build_user_query(question, pred_answer, answer, base_prompt):
    input_text = base_prompt.replace("{{question}}", question)
    input_text = input_text.replace("{{pred_step}}", pred_answer)
    input_text = input_text.replace("{{answer}}", answer)
    # analysis defaults to blank; if an analysis text is available, substitute it here instead
    input_text = input_text.replace("{{analysis}}", "")
    return input_text
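# ChatML-style chat template: a fixed system message, one human turn (filled in via .format), and an open gpt turn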
chat_prompt = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>human
{}<|im_end|>
<|im_start|>gpt
"""
basic_prompt = """## 任务描述\n \n你是一个数学老师,学生提交了题目的解题步骤,你需要参考`题干`,`解析`和`答案`,判断`学生解题步骤`的结果是否正确。忽略`学生解题步骤`中的错误,只关注最后的答案。答案可能出现在`解析`中,也可能出现在`答案`中。\n \n## 输入内容\n \n题干:\n \n```\n{{question}}\n```\n \n解析:\n \n```\n{{analysis}}\n \n```\n \n答案:\n \n```\n{{answer}}\n```\n \n学生解题步骤:\n \n```\n{{pred_step}}\n```\n \n输出:"""
base_prompt = chat_prompt.format(basic_prompt)
# Example: the student answered "3" to the question "1+1=", while the reference answer is "2"
prompt = build_user_query("1+1=", "3", "2", base_prompt)
model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
# Greedy decoding (temperature=0); 100005 is the model-specific end-of-generation token id
generated_ids = model.generate(model_inputs.input_ids, temperature=0, max_new_tokens=16, eos_token_id=100005)
# Strip the prompt tokens so only the newly generated judgment remains
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
```
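If you need to run many comparisons, the steps above can be wrapped in a single helper. The sketch below is illustrative only: the name `compare_answer`, its `analysis` default, and the `print` usage are not part of the released code; it simply reuses the `model`, `tokenizer`, `base_prompt`, and `device` objects defined in the Quick Start.

```python
def compare_answer(question, pred_answer, answer, analysis=""):
    """Illustrative helper: return the model's raw judgment string for one comparison."""
    # Fill the prompt template; pass the analysis text if one is available.
    input_text = base_prompt.replace("{{question}}", question)
    input_text = input_text.replace("{{pred_step}}", pred_answer)
    input_text = input_text.replace("{{answer}}", answer)
    input_text = input_text.replace("{{analysis}}", analysis)

    inputs = tokenizer([input_text], return_tensors="pt").to(device)
    output_ids = model.generate(
        inputs.input_ids, temperature=0, max_new_tokens=16, eos_token_id=100005
    )
    # Keep only the newly generated tokens and decode them.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=False)


# Example: check a student's answer of "3" against the reference answer "2"
verdict = compare_answer("1+1=", "3", "2")
print(verdict)
```

Parse the returned string according to how your evaluation pipeline interprets the model's verdict.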
## Documentation
For more detailed information about the model's API and functionality, please contact us.
## Contributing
Contributions to the Compare-Answer Model are welcome! If you have suggestions or improvements, please fork the repository and submit a pull request.
## License
This project is licensed under the MIT License - see the LICENSE.md file for details.
## Acknowledgements
Thanks to all contributors who have helped in developing this model. Special thanks to MathEval for providing the datasets and challenges that inspired this project.
## Contact
For any inquiries, please reach out via email at [email protected] or open an issue in this repository.
Thank you for using or contributing to the Compare-Answer Model!