This model serves as the backbone of the Code model used in the paper "DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging".

The detailed training/evaluation information can be found at https://api.wandb.ai/links/merge_exp/jhdkzbi2.

For more details about this model, please refer to our paper.
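As the paper's title suggests, DogeRM injects domain knowledge into a reward model through model merging. As a minimal illustrative sketch (not the paper's exact recipe; the function name, weight-dict representation, and `alpha` parameter are assumptions for illustration), merging in weight space can be viewed as linear interpolation between two models' parameters:

```python
def merge_weights(base, domain, alpha=0.5):
    """Linearly interpolate two weight dicts: (1 - alpha) * base + alpha * domain.

    `base` and `domain` map parameter names to values; `alpha` controls how
    much of the domain model's knowledge is mixed in. This is only a sketch of
    the general idea of weight-space merging, not DogeRM's exact procedure.
    """
    assert base.keys() == domain.keys(), "models must share the same parameters"
    return {k: (1 - alpha) * base[k] + alpha * domain[k] for k in base}

# Toy example with scalar "weights" standing in for tensors.
base = {"w": 1.0, "b": 0.0}
domain = {"w": 3.0, "b": 2.0}
merged = merge_weights(base, domain, alpha=0.5)
# merged == {"w": 2.0, "b": 1.0}
```

In practice the same interpolation would be applied per tensor over the models' state dicts; see the paper for the actual merging method and hyperparameters.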

If you find this model useful, please cite our paper:

@article{lin2024dogerm,
  title={DogeRM: Equipping Reward Models with Domain Knowledge through Model Merging},
  author={Lin, Tzu-Han and Li, Chen-An and Lee, Hung-yi and Chen, Yun-Nung},
  journal={arXiv preprint arXiv:2407.01470},
  year={2024}
}
Model size: 6.74B params (Safetensors, FP16)
Model: miulab/llama2-7b-oss-instruct