woodchen7 committed on
Commit 423c269 · verified
1 Parent(s): 2f4bc43

Upload configuration_hunyuan.py with huggingface_hub

Files changed (1)
  1. configuration_hunyuan.py +243 -0
configuration_hunyuan.py ADDED
@@ -0,0 +1,243 @@
+ # coding=utf-8
+ # Copyright (C) 2024 THL A29 Limited, a Tencent company. All rights reserved.
+ """ HunYuan model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+ from typing import List, Union, Optional
+
+
+ logger = logging.get_logger(__name__)
+
+
+ class HunYuanConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`HunYuanModel`]. It is used to instantiate a
+     HunYuan model according to the specified arguments, defining the model architecture. Instantiating a configuration
+     with the defaults will yield a similar configuration to that of the HunYuan-7B.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 290943):
+             Vocabulary size of the HunYuan model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`HunYuanModel`].
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 11008):
+             Dimension of the MLP representations or shared MLP representations.
+         moe_intermediate_size (`int` or `List`, *optional*, defaults to `None`):
+             Dimension of the MLP representations in MoE. Use a list if you want a different size per layer.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by mean-pooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, it will default to
+             `num_attention_heads`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 2048):
+             The maximum sequence length that this model might ever be used with.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+         pretraining_tp (`int`, *optional*, defaults to 1):
+             Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
+             document](https://huggingface.co/docs/transformers/parallelism) to understand more about it. This value is
+             necessary to ensure exact reproducibility of the pretraining results. Please refer to [this
+             issue](https://github.com/pytorch/pytorch/issues/76232).
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie the input and output word embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+             `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum. See the following thread for more information on how
+             these scaling strategies behave:
+             https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
+             experimental feature, subject to breaking API changes in future versions.
+         attention_bias (`bool`, *optional*, defaults to `False`):
+             Whether to use a bias in the query, key, value and output projection layers during self-attention.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio for the attention probabilities.
+         use_qk_norm (`bool`, *optional*, defaults to `False`):
+             Whether to apply normalization to the query and key states in attention.
+         use_cla (`bool`, *optional*, defaults to `False`):
+             Whether to use Cross-Layer Attention (CLA) in attention.
+         cla_share_factor (`int`, *optional*, defaults to 1):
+             The number of consecutive layers that share key/value states when CLA is enabled.
+         num_experts (`int` or `List`, *optional*, defaults to 1):
+             The number of experts for MoE. If it is a list, it will be used as the number of experts for each layer.
+         num_shared_expert (`int` or `List`, *optional*, defaults to 1):
+             The number of shared experts for MoE. If it is a list, it will be used as the number of shared experts for
+             each layer.
+         moe_topk (`int` or `List`, *optional*, defaults to 1):
+             The top-k value for MoE routing. If it is a list, it will be used as the top-k value for each layer.
+         capacity_factor (`float` or `List`, *optional*, defaults to 1.0, currently unused):
+             The capacity factor for MoE. If it is a list, it will be used as the capacity factor for each layer.
+         moe_layer_num_skipped (`int`, *optional*, defaults to 0):
+             The first `moe_layer_num_skipped` layers do not use MoE.
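+
+     Example (an illustrative sketch; the override values below are hypothetical, not released defaults):
+
+     ```python
+     >>> # Instantiate a configuration; fields that are not passed keep the defaults documented above.
+     >>> config = HunYuanConfig(num_experts=16, moe_topk=2, num_shared_expert=1)
+     >>> config.model_type
+     'hunyuan'
+     ```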
+     """
+
+     model_type = "hunyuan"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=290943,
+         hidden_size=4096,
+         intermediate_size: int = 11008,
+         moe_intermediate_size: Union[int, List] = None,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         attention_head_dim=None,
+         hidden_act="silu",
+         max_position_embeddings=2048,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         pretraining_tp=1,
+         tie_word_embeddings=False,
+         rope_theta=10000.0,
+         rope_scaling=None,
+         attention_bias=False,
+         mlp_bias=False,
+         attention_dropout=0.0,
+         use_qk_norm=False,
+         use_cla=False,
+         cla_share_factor=1,
+         num_experts: Union[int, List] = 1,
+         use_mixed_mlp_moe=False,
+         num_shared_expert: Union[int, List] = 1,
+         moe_topk: Union[int, List] = 1,
+         # capacity_factor: Union[int, List] = 1.0,
+         moe_drop_tokens=False,
+         moe_random_routing_dropped_token=False,
+         use_mla=False,
+         kv_lora_rank=512,
+         q_lora_rank=1536,
+         qk_rope_head_dim=64,
+         v_head_dim=128,
+         qk_nope_head_dim=128,
+         moe_layer_num_skipped=0,
+         norm_topk_prob=False,
+         routed_scaling_factor=1.0,
+         group_limited_greedy=False,
+         n_group=None,
+         topk_group=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.moe_intermediate_size = moe_intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.num_experts = num_experts
+         self.use_mixed_mlp_moe = use_mixed_mlp_moe
+         self.num_shared_expert = num_shared_expert
+         self.moe_topk = moe_topk
+         # self.capacity_factor = capacity_factor
+         self.moe_drop_tokens = moe_drop_tokens
+         self.moe_random_routing_dropped_token = moe_random_routing_dropped_token
+
+         if attention_head_dim is not None:
+             self.attention_head_dim = attention_head_dim
+         else:
+             self.attention_head_dim = self.hidden_size // num_attention_heads
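+             # Worked example (illustrative): with the defaults hidden_size=4096 and num_attention_heads=32,
+             # attention_head_dim falls back to 4096 // 32 = 128.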
+
+         # for backward compatibility
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.pretraining_tp = pretraining_tp
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         # self._rope_scaling_validation()  # TODO: Need validation?
+         self.attention_bias = attention_bias
+         self.mlp_bias = mlp_bias
+         self.attention_dropout = attention_dropout
+         self.use_qk_norm = use_qk_norm
+         self.use_cla = use_cla
+         self.cla_share_factor = cla_share_factor
+
+         # MLA args
+         self.use_mla = use_mla
+         self.kv_lora_rank = kv_lora_rank
+         self.q_lora_rank = q_lora_rank
+         self.qk_rope_head_dim = qk_rope_head_dim
+         self.qk_nope_head_dim = qk_nope_head_dim
+         self.v_head_dim = v_head_dim
+
+         # DeepSeek related args
+         self.moe_layer_num_skipped = moe_layer_num_skipped
+         self.norm_topk_prob = norm_topk_prob
+         self.routed_scaling_factor = routed_scaling_factor
+         self.group_limited_greedy = group_limited_greedy
+         self.n_group = n_group
+         self.topk_group = topk_group
+
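+         # Special token ids and `tie_word_embeddings` are consumed by `PretrainedConfig.__init__`,
+         # so they are forwarded to the parent class below rather than set directly on `self`.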
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with two fields, `type` and `factor` or `type` and `alpha`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_factor = self.rope_scaling.get("factor", None)
+         rope_scaling_alpha = self.rope_scaling.get("alpha", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
+             raise ValueError(
+                 f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
+             )
+         if rope_scaling_factor is None and rope_scaling_alpha is None:
+             raise ValueError("`rope_scaling` must have either a `factor` or an `alpha` field, got neither")
+         if rope_scaling_factor is not None:
+             if not isinstance(rope_scaling_factor, float) or rope_scaling_factor <= 1.0:
+                 raise ValueError(f"`rope_scaling`'s factor field must be a float > 1.0, got {rope_scaling_factor}")
+         if rope_scaling_alpha is not None:
+             if not isinstance(rope_scaling_alpha, float) or rope_scaling_alpha <= 1.0:
+                 raise ValueError(f"`rope_scaling`'s alpha field must be a float > 1.0, got {rope_scaling_alpha}")
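+
+ # Illustrative `rope_scaling` values accepted by `_rope_scaling_validation` above (if it were enabled):
+ #   {"type": "linear", "factor": 4.0}
+ #   {"type": "dynamic", "alpha": 1000.0}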