Or4cl3-1 committed
Commit 018ea24 · verified · 1 Parent(s): e41faee

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +65 -0
README.md ADDED
@@ -0,0 +1,65 @@
---
tags:
- merge
- mergekit
- lazymergekit
- Or4cl3-1/cognitive-agent-xtts-optimized
- Or4cl3-1/multimodal-fusion-optimized
base_model:
- Or4cl3-1/cognitive-agent-xtts-optimized
- Or4cl3-1/multimodal-fusion-optimized
---

# CogniFusion-XTTS-slerp

CogniFusion-XTTS-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Or4cl3-1/cognitive-agent-xtts-optimized](https://huggingface.co/Or4cl3-1/cognitive-agent-xtts-optimized)
* [Or4cl3-1/multimodal-fusion-optimized](https://huggingface.co/Or4cl3-1/multimodal-fusion-optimized)
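
The `slerp` merge method interpolates each pair of corresponding weight tensors along the arc between them rather than averaging them linearly. The snippet below is only an illustrative sketch of that operation on a single pair of tensors (the `slerp_tensors` helper is hypothetical and not part of mergekit's API); the actual merge is driven by the configuration in the next section.

```python
import torch

def slerp_tensors(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (illustrative only)."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    dot = torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation
        merged = (1 - t) * a_flat + t * b_flat
    else:
        # Interpolate along the arc: t=0 keeps tensor a, t=1 keeps tensor b
        merged = (torch.sin((1 - t) * theta) * a_flat + torch.sin(t * theta) * b_flat) / torch.sin(theta)
    return merged.reshape(a.shape).to(a.dtype)

# Example: an equal spherical blend of two random 4x4 "weights"
merged_weight = slerp_tensors(0.5, torch.randn(4, 4), torch.randn(4, 4))
```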

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: Or4cl3-1/cognitive-agent-xtts-optimized
        layer_range: [0, 32]  # adjust to match the layer count of the cognitive agent model
      - model: Or4cl3-1/multimodal-fusion-optimized
        layer_range: [0, 32]  # adjust to match the layer count of the multimodal fusion model
merge_method: slerp
base_model: Or4cl3-1/cognitive-agent-xtts-optimized
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]  # interpolation factor t for self-attention tensors, graded across layers
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]  # interpolation factor t for MLP tensors, graded across layers
    - value: 0.5  # default interpolation factor for all other tensors
dtype: bfloat16

# Any additional optimization parameters can be added here (ethical considerations belong in the model card prose, not the config)
```
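
To reproduce the merge locally, a minimal sketch assuming the YAML above is saved as `config.yaml` and that you are working in a notebook environment (the flags are the ones LazyMergekit typically passes to mergekit; adjust the output directory and shard size as needed):

```python
!pip install -qU mergekit

# Merge the two parent models according to config.yaml and write the result to ./merge
!mergekit-yaml config.yaml merge --copy-tokenizer --allow-crimes --out-shard-size 1B --lazy-unpickle
```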

## 💻 Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Or4cl3-1/CogniFusion-XTTS-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
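
If you prefer to drive generation directly rather than through `pipeline`, a minimal sketch using `AutoModelForCausalLM` (this assumes the merged checkpoint exposes a standard causal LM head and a chat template, which has not been verified here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Or4cl3-1/CogniFusion-XTTS-slerp"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Build the prompt with the model's chat template and generate a reply
messages = [{"role": "user", "content": "What is a large language model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```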