---
base_model:
- lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half
library_name: transformers
tags:
- mergekit
- merge
---
# polyglot

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [NearSwap](https://huggingface.co/alchemonaut/QuartetAnemoi-70B-t0.0001) merge method, with merge/poly3 as the base.

### Models Merged

The following models were included in the merge:
* [lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half](https://huggingface.co/lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: merge/poly3
chat_template: llama3
dtype: float32
merge_method: nearswap
parameters:
  t: 0.0001
slices:
- sources:
  - layer_range: [0, 32]
    model: lightblue/suzume-llama-3-8B-multilingual-orpo-borda-half
  - layer_range: [0, 32]
    model: merge/poly3
tokenizer:
  pad_to_multiple_of: 4
```
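For intuition, the NearSwap rule can be sketched element-wise: each base weight is interpolated toward the secondary model with weight `t / |base - other|`, clipped to [0, 1], so parameters that already nearly agree (difference below `t`) are swapped to the secondary model, while strongly differing parameters keep their base values. This is a minimal NumPy sketch of that idea, not mergekit's actual implementation; the function name `nearswap` and the exact handling of zero differences here are illustrative assumptions.

```python
import numpy as np

def nearswap(base: np.ndarray, other: np.ndarray, t: float) -> np.ndarray:
    """Illustrative sketch of the NearSwap blend (not mergekit's code).

    Interpolation weight per element is t / |base - other|, clipped to
    [0, 1]: near-identical elements move fully to `other`, strongly
    differing elements stay at `base`.
    """
    diff = np.abs(base - other)
    # Where the tensors are exactly equal, either choice is the same value;
    # treat the weight as 1.0 to avoid division by zero (an assumption).
    weight = np.where(diff > 0, t / np.where(diff > 0, diff, 1.0), 1.0)
    weight = np.clip(weight, 0.0, 1.0)
    return (1.0 - weight) * base + weight * other

# Three cases with the card's t=0.0001: nearly equal, far apart, identical.
base = np.array([1.0, 1.0, 1.0])
other = np.array([1.00005, 1.5, 1.0])
merged = nearswap(base, other, t=0.0001)
# merged[0] swaps to 1.00005; merged[1] stays ~1.0; merged[2] is 1.0
```

With the tiny `t: 0.0001` used in this card, only weights that are almost identical between merge/poly3 and the suzume model are replaced, leaving the base model largely intact.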