
Celso F

celsowm

AI & ML interests

None yet

Recent Activity

updated a dataset 18 days ago: celsowm/leis_ordinarias_1988_2024
updated a dataset about 1 month ago: celsowm/lei_n_15_022_2024
updated a dataset about 1 month ago: celsowm/enunciados_pge_rj_messages

Organizations

None yet

celsowm's activity

reacted to singhsidhukuldeep's post with 🔥 2 months ago
Good folks at @nvidia have released exciting new research on normalized Transformers (nGPT) for faster and more efficient language modeling!

Here is what they are proposing:

1. Remove all normalization layers, like RMSNorm or LayerNorm, from the standard Transformer architecture.

2. Normalize all matrices along their embedding dimension after each training step. This includes input and output embeddings, attention matrices (Q, K, V), output projection matrices, and MLP matrices.

3. Replace the standard residual connections with normalized update equations using learnable eigen learning rates for the attention and MLP blocks.

4. Change the softmax scaling factor in the attention mechanism from 1/sqrt(d_k) to sqrt(d_k).

5. Implement rescaling and optional normalization of query (q) and key (k) vectors in the attention mechanism using learnable scaling factors.

6. Rescale the intermediate states of the MLP block using learnable scaling factors.

7. Implement rescaling of the output logits using learnable scaling factors.

8. Remove weight decay and learning rate warmup from the optimization process.

9. Initialize the eigen learning rates and scaling factors with appropriate values as specified in the paper.

10. During training, treat all vectors and matrices as residing on a unit hypersphere, interpreting matrix-vector multiplications as cosine similarities.

11. Implement the update equations for the hidden states using the normalized outputs from attention and MLP blocks, controlled by the eigen learning rates.

12. After each forward pass, normalize all parameter matrices to ensure they remain on the unit hypersphere.

13. Use the Adam optimizer without weight decay for training the model.

14. When computing loss, apply the learnable scaling factor to the logits before the softmax operation.

15. During inference, follow the same normalization and scaling procedures as in training.
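
The core of the recipe (steps 2, 3, 10, and 12) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's code: the function names, the per-dimension alpha, and its 0.05 init value are placeholder assumptions, not values taken from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    # Project vectors onto the unit hypersphere along the given axis.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def normalized_update(h, block_out, alpha):
    # Replacement for the standard residual connection (steps 3 and 11):
    # move h toward the normalized attention/MLP output, controlled by a
    # learnable per-dimension "eigen learning rate" alpha, then re-project
    # the hidden state back onto the unit hypersphere.
    return l2_normalize(h + alpha * (l2_normalize(block_out) - h))

def renormalize_weights(W):
    # Steps 2 and 12: after each optimizer step, normalize every weight
    # matrix along its embedding dimension, so matrix-vector products can
    # be read as cosine similarities (step 10).
    return l2_normalize(W, axis=-1)

# Toy usage (shapes and values are illustrative only):
rng = np.random.default_rng(0)
d = 8
h = l2_normalize(rng.standard_normal(d))   # hidden state on the sphere
attn_out = rng.standard_normal(d)          # pretend attention output
alpha = np.full(d, 0.05)                   # placeholder init, see step 9

h_new = normalized_update(h, attn_out, alpha)
print(np.linalg.norm(h_new))               # hidden state stays unit-norm
```

Note how the update never lets the hidden state drift off the sphere: both the block output and the updated state are re-normalized, which is what makes the weight-decay and warmup removal in step 8 plausible.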

Excited to see how it scales to larger models and datasets!
New activity in nvidia/NVLM-D-72B 3 months ago
New activity in unsloth/Llama-3.2-3B-Instruct-GGUF 3 months ago

11b instruct gguf?

#1 opened 3 months ago by celsowm
reacted to bartowski's post with ❤️ 3 months ago
Reposting from twitter:

Just so you all know, I'll be on vacation for the following two weeks and away from home! I'm hoping to get on at least once a day to load up some quants, but I won't be as bleeding edge and on the ball :) feel free to shoot me a message if you see one I should make!

In the meantime if you need something bleeding edge make sure to check out @MaziyarPanahi or @bullerwins who both put out great work!
New activity in huggingchat/chat-ui 4 months ago

Your feedback on HuggingChat

#1 opened over 1 year ago by victor
New activity in ai21labs/AI21-Jamba-1.5-Mini 4 months ago

GGUF quants version?

#1 opened 4 months ago by celsowm
upvoted an article 4 months ago

Welcome FalconMamba: The first strong attention-free 7B model

updated a Space 5 months ago
New activity in internlm/internlm2_5-20b-chat 5 months ago