retiredcarboxyl committed
Commit: d0cbaf6
Parent(s): 99107e3

updated readme.md file and made some changes

Files changed (1):
  README.md (+15 -11)
README.md CHANGED
@@ -1,20 +1,19 @@
 ---
 tags:
 - merge
-- mergekit
-- lazymergekit
-- meta-llama/Llama-2-7b-chat-hf
-- meta-llama/Llama-2-7b-hf
+- meta-llama/Llama-2-70b-chat-hf
+- mistralai/Mixtral-8x7B-Instruct-v0.1
+- google/gemma-7b
+- tiiuae/falcon-180B
 base_model:
 - meta-llama/Llama-2-7b-chat-hf
-- meta-llama/Llama-2-7b-hf
 language:
 - en
 ---
 
-# CarbonGPT
+# Chat2Eco
 
-CarbonGPT is a merge of the following models using [CarbonGPTMerger](https://colab.research.google.com/drive/1ECBKKGQeV3OhnpiIaThZEl8XMQaXNoEh?usp=sharing):
+Chat2Eco is a merge of the following models using [Chat2Eco](https://colab.research.google.com/drive/1ECBKKGQeV3OhnpiIaThZEl8XMQaXNoEh?usp=sharing):
 * [meta-llama/Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)
 * [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)
 * [google/gemma-7b](https://huggingface.co/google/gemma-7b)
@@ -25,12 +24,17 @@ CarbonGPT is a merge of the following models using [CarbonGPTMerger](https://col
 ```yaml
 slices:
   - sources:
-    - model: meta-llama/Llama-2-7b-chat-hf
+    - model: meta-llama/Llama-2-70b-chat-hf
       layer_range: [0, 32]
-    - model: meta-llama/Llama-2-7b-hf
+    - model: mistralai/Mixtral-8x7B-Instruct-v0.1
       layer_range: [0, 32]
+    - model: google/gemma-7b
+      layer_range: [0, 64]
+    - model: tiiuae/falcon-180B
+      layer_range: [0, 64]
+
 merge_method: slerp
-base_model: meta-llama/Llama-2-7b-chat-hf
+base_model: meta-llama/Llama-2-70b-chat-hf
 parameters:
   t:
   - filter: self_attn
@@ -50,7 +54,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "retiredcarboxyl/CarbonGPT"
+model = "retiredcarboxyl/Chat2Eco"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
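For context, a slerp config like the one added above is normally executed with mergekit. Below is a minimal sketch using mergekit's documented Python API; the file paths (`config.yaml`, `./merged`) are illustrative assumptions, not part of this commit:

```python
# Minimal sketch: run the merge config above with mergekit (pip install mergekit).
# "config.yaml" and "./merged" are assumed, illustrative paths.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML merge config into mergekit's configuration object.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged",  # directory the merged weights are written to
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```

The CLI equivalent would be `mergekit-yaml config.yaml ./merged --copy-tokenizer`.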
 
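The usage snippet in the last hunk is cut off at `tokenizer = AutoTokenizer.from_pretrained(model)`. For reference, a typical continuation of this style of example (assumed here; not shown in the diff) renders the chat messages into a prompt and generates with a `transformers` pipeline:

```python
import torch
import transformers
from transformers import AutoTokenizer

model = "retiredcarboxyl/Chat2Eco"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Render the chat messages into a single prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# fp16 weights and device_map="auto" keep memory usage manageable on a single GPU.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```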