abideen committed
Commit 3ed677a
1 Parent(s): 1d71eac

Update README.md

Files changed (1)
  1. README.md +7 -0
README.md CHANGED
@@ -11,14 +11,21 @@ tags:
  base_model:
  - mlabonne/AlphaMonarch-7B
  - Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0
+ language:
+ - en
+ library_name: transformers
  ---

  # MonarchCoder-MoE-2x7B

+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/eoHRSEuT-_TtlrPX7PrOW.jpeg)
+
  MonarchCoder-MoE-2x7B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
  * [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
  * [Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0](https://huggingface.co/Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0)

+ The main aim behind creating this model is to have a single model that performs well at reasoning, conversation, and coding. AlphaMonarch performs amazingly well on reasoning and conversation tasks. Merging AlphaMonarch with a coding model yielded MonarchCoder-MoE-2x7B, which performs better on the OpenLLM, Nous, and HumanEval benchmarks.
+
  ## 🧩 Configuration

  ```yaml
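Since the commit adds `library_name: transformers` to the card metadata, the merged model should be loadable through the standard transformers API. Below is a minimal inference sketch, not part of the commit itself: the repo id `abideen/MonarchCoder-MoE-2x7B` is an assumption based on the committer's username (the diff never states the full repo path), and `device_map="auto"` assumes accelerate is installed.

```python
# Minimal inference sketch for the merged MoE model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abideen/MonarchCoder-MoE-2x7B"  # assumed repo id, not stated in the diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # requires accelerate

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```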