---
license: llama2
base_model:
- unsloth/llama-2-13b
- layoric/llama-2-13b-code-alpaca
- vanillaOVO/WizardMath-13B-V1.0
tags:
- merge
---
# AIM Paper Checkpoints Uploaded For Replication
This repository contains one of the checkpoints used in the paper "Activation-Informed Merging of Large Language Models" (AIM). The specifics of this checkpoint are as follows:

- **Merging Method:** dare_linear (illustrated in the sketch after this list)
- **Models Used In Merging:**
    - ***Base Model:*** unsloth/llama-2-13b
    - ***Code:*** layoric/llama-2-13b-code-alpaca
    - ***Math:*** vanillaOVO/WizardMath-13B-V1.0
- **AIM:** True
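
The dare_linear method applies DARE (drop-and-rescale) to each expert model's delta from the base model and then combines the surviving deltas linearly. The sketch below is a minimal, per-tensor illustration with assumed drop rate and merge weights; it is not the paper's implementation, and the AIM step itself (which uses activation information to guide the merge) is only available in the GitHub repository linked below.

```python
import torch

def dare_linear_merge(base, experts, weights, drop_rate=0.9):
    """Illustrative DARE + linear merge of a single parameter tensor.

    base:      parameter tensor from the base model
    experts:   corresponding tensors from the fine-tuned expert models
    weights:   per-expert merge coefficients
    drop_rate: fraction p of each delta that is randomly dropped
    """
    merged_delta = torch.zeros_like(base)
    for expert, w in zip(experts, weights):
        delta = expert - base                        # task vector of this expert
        keep = torch.rand_like(delta) >= drop_rate   # keep roughly (1 - p) of the entries
        delta = delta * keep / (1.0 - drop_rate)     # rescale survivors by 1 / (1 - p)
        merged_delta += w * delta                    # linear combination of deltas
    return base + merged_delta

# Hypothetical usage for one weight tensor from the code and math experts:
# merged_W = dare_linear_merge(base_W, [code_W, math_W], weights=[0.5, 0.5], drop_rate=0.9)
```
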

Benchmark results and further paper details can be found in the official [GitHub repository](https://github.com/ahnobari/ActivationInformedMerging.git).
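
To try the checkpoint locally, a minimal loading sketch with Hugging Face `transformers` is given below. The repository id is a placeholder (use this repo's id or a local path), and `device_map="auto"` assumes `accelerate` is installed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this/checkpoint"  # placeholder: replace with this repository's id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Write a Python function that returns the n-th Fibonacci number."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```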