Quazim0t0 committed 70dbaee (verified) · 1 parent: 91d7cf2

Update README.md

Files changed (1): README.md (+0 −52)
---
tags:
- merge
- mergekit
- lazymergekit
---

# graphite-14b-sce

graphite-14b-sce is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Quazim0t0/Alien-CoT-14B-sce](https://huggingface.co/Quazim0t0/Alien-CoT-14B-sce)
* [Quazim0t0/Mithril-14B-sce](https://huggingface.co/Quazim0t0/Mithril-14B-sce)
* [Quazim0t0/Phi4Basis-14B-sce](https://huggingface.co/Quazim0t0/Phi4Basis-14B-sce)
* [bunnycore/Phi-4-RStock-v0.1](https://huggingface.co/bunnycore/Phi-4-RStock-v0.1)

## 🧩 Configuration

```yaml
models:
  # Pivot model
  - model: Quazim0t0/Alien-CoT-14B-sce
  # Target models
  - model: Quazim0t0/Mithril-14B-sce
  - model: Quazim0t0/Phi4Basis-14B-sce
  - model: bunnycore/Phi-4-RStock-v0.1
merge_method: sce
base_model: bunnycore/Phi-4-RStock-v0.1
parameters:
  select_topk: 1.0
dtype: bfloat16
```

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "Quazim0t0/graphite-14b-sce"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```