djuna committed
Commit 784e6a3
1 Parent(s): 36c5db3
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -13,9 +13,9 @@ tags:
 - macadeliccc/Samantha-Qwen2-1.5B
 ---
 
-# Qwen2-4x1.5B
+# Qwen2-2x1.5B
 
-Qwen2-4x1.5B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+Qwen2-2x1.5B is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [cognitivecomputations/dolphin-2.9.3-qwen2-1.5b](https://huggingface.co/cognitivecomputations/dolphin-2.9.3-qwen2-1.5b)
 * [macadeliccc/Samantha-Qwen2-1.5B](https://huggingface.co/macadeliccc/Samantha-Qwen2-1.5B)
 
@@ -68,7 +68,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "djuna/Qwen2-4x1.5B"
+model = "djuna/Qwen2-2x1.5B"
 
 tokenizer = AutoTokenizer.from_pretrained(model)
 pipeline = transformers.pipeline(
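The second hunk shows only the slice of the README's usage example that the diff context exposes. For reference, here is a complete, runnable sketch built around those visible lines; the chat prompt and generation parameters are illustrative assumptions and are not part of this commit.

```python
# Sketch of the renamed model's usage snippet.
# Only the imports, model id, tokenizer, and the pipeline(...) call are
# confirmed by the diff context; the prompt and sampling settings are assumed.
from transformers import AutoTokenizer
import transformers
import torch

model = "djuna/Qwen2-2x1.5B"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Example prompt formatted with the tokenizer's chat template (assumed).
messages = [{"role": "user", "content": "Explain what a Mixture of Experts model is."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```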