ejbejaranos committed on
Commit 4a74714 • 1 Parent(s): b8e0950

Create README.md

Files changed (1): README.md (+210, added)

---
library_name: transformers
license: apache-2.0
datasets:
- abideen/Cosmopedia-100k-pretrain
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# 🚀 Llama3-8B-to2B-BitnetDownscaling (from 8B to 2B) Transformation & Training

This project transforms a Llama3 model from 8B parameters into a BitNet architecture with roughly 2B parameters by replacing its linear projections with BitLinear layers. The downscaled model is then trained on a predefined dataset and uploaded to Hugging Face for future use.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/X6O_WbSqbdOWjhTm0tWU1.png)
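
This README does not spell out the downscaling step itself, but the architecture printed below keeps only 6 of the 32 decoder layers in Llama-3.1-8B. A minimal sketch of that step, assuming plain layer truncation is the downscaling used here, could look like this:

```python
# Hypothetical sketch of the downscaling step: truncating the 8B model's
# 32 decoder layers to the 6 kept in the architecture shown below.
# This reflects the printed architecture, not necessarily the authors' exact procedure.
from transformers import AutoModelForCausalLM
from torch import nn

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Keep only the first 6 decoder layers and update the config to match
model.model.layers = nn.ModuleList(list(model.model.layers)[:6])
model.config.num_hidden_layers = 6
```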

## Features 🌈
- **Model Size:** ~2B parameters (downscaled from 8B) 🧠
- **Architecture:** BitNet 🏗️
- **BitLinear Layers:** Quantize weights to the ternary values -1, 0, and 1 (see the sketch below) ➖
- **Optimized for:** Fast inference and memory efficiency ⚡
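
To make the ternary idea concrete, here is a small self-contained demo of the same `weight_quant` scheme used in the conversion code later in this README (absmean scaling, then rounding into {-1, 0, 1}); the tensor values are made up for illustration:

```python
import torch

def weight_quant(w):
    # Scale by the mean absolute value, round, and clamp to {-1, 0, 1}
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    u = (w * scale).round().clamp_(-1, 1)
    return u / scale

w = torch.tensor([[0.31, -0.02, -0.45], [0.07, 0.52, -0.19]])
print(weight_quant(w))
# Every entry is now -1, 0, or +1 times a shared scale (the mean absolute value)
```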

## Architecture

```text
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(128256, 4096)
    (layers): ModuleList(
      (0-5): 6 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): BitLinear(in_features=4096, out_features=4096, bias=False)
          (k_proj): BitLinear(in_features=4096, out_features=1024, bias=False)
          (v_proj): BitLinear(in_features=4096, out_features=1024, bias=False)
          (o_proj): BitLinear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): BitLinear(in_features=4096, out_features=14336, bias=False)
          (up_proj): BitLinear(in_features=4096, out_features=14336, bias=False)
          (down_proj): BitLinear(in_features=14336, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): Identity()
        (post_attention_layernorm): LlamaRMSNorm((4096,), eps=1e-05)
      )
    )
    (norm): LlamaRMSNorm((4096,), eps=1e-05)
    (rotary_emb): LlamaRotaryEmbedding()
  )
  (lm_head): Linear(in_features=4096, out_features=128256, bias=False)
)
```
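
From these shapes you can sanity-check the parameter count yourself (a rough back-of-the-envelope estimate that ignores the norm weights):

```python
# Rough parameter count from the shapes printed above (norm weights omitted)
embed = 128256 * 4096                      # embed_tokens
attn = 2 * 4096 * 4096 + 2 * 4096 * 1024   # q/o plus k/v projections
mlp = 3 * 4096 * 14336                     # gate, up, and down projections
lm_head = 4096 * 128256
total = embed + 6 * (attn + mlp) + lm_head
print(f"{total / 1e9:.2f}B")               # ~2.36B parameters
```

This is consistent with the ~2B size claimed above.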
55
+ ---
56
+ ### Model Description
57
+
58
+ <!-- Provide a longer summary of what this model is. -->
59
+
60
+ This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated.
61
+
62
+ - **Developed by:** [email protected] && [email protected]
63
+ - **Funded by [optional]:** ITCL
64
+ - **Model type:** LLama3 8B Tramsformed to Bitnet using Downscaling technique
65
+ - **Language(s) (NLP):** Bitnet
66
+ - **License:** [More Information Needed]
67
+ - **Finetuned from model [optional]:** [More Information Needed]
68
+
69
+

## Requirements 📦
Make sure you have the following libraries installed:

```bash
pip install transformers torch huggingface_hub wandb coloredlogs
```

You can install all of these dependencies with a single pip command, as shown above! 🎉

## Usage 🔍
### Loading the Model
To use this model, you can load it from Hugging Face and run the following code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.llama.modeling_llama import (
    LlamaDecoderLayer,
    LlamaMLP,
    LlamaRMSNorm,
    LlamaSdpaAttention,
)
import torch
from torch import nn
import torch.nn.functional as F
import coloredlogs
import logging

coloredlogs.install(level='INFO', fmt='%(asctime)s - %(levelname)s - %(message)s', logger=logging.getLogger())
logger = logging.getLogger(__name__)

HF_TOKEN = "your_api_key_here"

model_id = "ejbejaranos/Llama3-8B-ITCL-Bitnet1.6B"

# Load the pretrained BitNet model and its tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    token=HF_TOKEN
)

# Set the pad_token_id
model.config.pad_token_id = tokenizer.eos_token_id

def count_parameters(model):
    # Calculate the number of trainable parameters in billions
    num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) / 10**9
    print(f"Model size: {num_params:.3f}B parameters")
    return int(num_params)

def activation_quant(x):
    # Per-token 8-bit quantization of activations (absmax scaling)
    scale = 127.0 / x.abs().max(dim=-1, keepdim=True).values.clamp_(min=1e-5)
    y = (x * scale).round().clamp_(-128, 127)
    y = y / scale
    return y

def weight_quant(w):
    # Ternary quantization of weights (absmean scaling into {-1, 0, 1})
    scale = 1.0 / w.abs().mean().clamp_(min=1e-5)
    u = (w * scale).round().clamp_(-1, 1)
    u = u / scale
    return u

class BitLinear(nn.Linear):
    def forward(self, x):
        w = self.weight  # a weight tensor with shape [d, k]
        x = x.to(w.device)
        RMSNorm = LlamaRMSNorm(x.shape[-1]).to(w.device)
        x_norm = RMSNorm(x)
        # Straight-through estimator: quantized values in the forward pass,
        # full-precision gradients in the backward pass
        x_quant = x_norm + (activation_quant(x_norm) - x_norm).detach()
        w_quant = w + (weight_quant(w) - w).detach()
        y = F.linear(x_quant, w_quant)
        return y

def convert_to_bitnet(model, copy_weights):
    for name, module in model.named_modules():
        # Replace the linear projections in attention and MLP blocks with BitLinear
        if isinstance(module, (LlamaSdpaAttention, LlamaMLP)):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, nn.Linear):
                    bitlinear = BitLinear(child_module.in_features, child_module.out_features, child_module.bias is not None).to(device="cuda:0")
                    if copy_weights:
                        bitlinear.weight = child_module.weight
                        if child_module.bias is not None:
                            bitlinear.bias = child_module.bias
                    setattr(module, child_name, bitlinear)
        # BitLinear normalizes its own input, so the input layernorm is dropped
        elif isinstance(module, LlamaDecoderLayer):
            for child_name, child_module in module.named_children():
                if isinstance(child_module, LlamaRMSNorm) and child_name == "input_layernorm":
                    setattr(module, child_name, nn.Identity().to(device="cuda:0"))

convert_to_bitnet(model, copy_weights=True)
model.to(device="cuda:0")

logger.info(f"🔢 Number of parameters in the model after extracting weights: {count_parameters(model)}")
logger.info(f"📝 Reduced model structure:\n{model}")

prompt = "What is the color of sky?"
inputs = tokenizer(prompt, return_tensors="pt", padding=True, truncation=True).to(model.device)
inputs['attention_mask'] = inputs['input_ids'] != model.config.pad_token_id

generate_ids = model.generate(inputs.input_ids, attention_mask=inputs['attention_mask'], max_length=250)
decoded_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)

print(decoded_output[0])  # Print the generated response
```
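
The introduction mentions that the converted model is uploaded to Hugging Face for future use. A minimal sketch of that step, assuming you have write access to a repository of your own (the repo id below is a placeholder):

```python
# Hypothetical upload step; replace the repo id with one you own
model.push_to_hub("your-username/your-bitnet-model", token=HF_TOKEN)
tokenizer.push_to_hub("your-username/your-bitnet-model", token=HF_TOKEN)
```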

### Performing Inference
Generate text using the model to unleash its power! 💬✨ Sample output:

```text
- What role does explainability play in your AI solutions?

How can you ensure that your AI system is able to accurately predict and respond to user inputs?
These are some of the questions that AI developers have been asking themselves in the last few years.
In this section, we will explore some of the key concepts and techniques that AI developers have used to develop in their AI systems.

First, let's consider the importance of understanding the role of AI in AI.
AI systems can be incredibly powerful tools for automating tasks, analyzing data, and identifying patterns.
They can analyze large datasets and identify patterns, trends, and anomalies that might be missed by human analysts.
By analyzing large datasets, AI can help identify patterns and trends that might otherwise go unnoticed.

One of the most significant challenges in AI development is the lack of transparency and accountability.
With AI systems becoming increasingly sophisticated, there is a growing need for transparency and accountability in AI development.
This means that there is a growing need for transparency and accountability in AI development.
However, as AI becomes more sophisticated, it can also lead to unintended consequences, such as job loss or reputational damage.
```
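
The loading snippet above decodes greedily, controlled only by `max_length`. If you want more varied generations, you can pass standard sampling options to `generate` (the values below are illustrative, not tuned for this model):

```python
# Illustrative sampling settings; tune to taste
generate_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs['attention_mask'],
    max_length=250,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
```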

## Contact 📫
For questions or suggestions, feel free to reach out to me:
- **Email:** [email protected]
- **GitHub:** [ejbejaranos](https://github.com/ejbejaranos) 🌐