LHK_DPO_v1

LHK_DPO_v1 is trained via Direct Preference Optimization (DPO) from TomGrc/FusionNet_7Bx2_MoE_14B.
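As background, DPO fine-tunes a policy model directly on preference pairs by contrasting its log-probabilities for chosen and rejected responses against a frozen reference model. A minimal sketch of the per-pair DPO loss (the function name, inputs, and `beta` value here are illustrative, not taken from this model's actual training setup):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Per-pair DPO loss from sequence log-probabilities.

    pi_*  : log-probs under the policy being trained
    ref_* : log-probs under the frozen reference model
    beta  : strength of the implicit KL constraint (illustrative value)
    """
    # Implicit reward margins of policy vs. reference for each response
    chosen_margin = pi_chosen - ref_chosen
    rejected_margin = pi_rejected - ref_rejected
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written stably as log1p(exp(-logits))
    return math.log1p(math.exp(-logits))

# When policy and reference agree exactly, the loss is log(2)
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
```

Minimizing this loss pushes the policy to assign relatively more probability to the chosen response than the reference does, while `beta` limits how far the policy drifts from the reference.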

Details

Coming soon.

Evaluation Results

Coming soon.

Contamination Results

Coming soon.
