Vel Yanchina

velyan

AI & ML interests

None yet

Recent Activity

Organizations

Hugging Face Discord Community

velyan's activity

New activity in meta-llama/Llama-3.2-3B-Instruct-QLORA_INT4_EO8 about 2 months ago

Upload config.json
#4 opened about 2 months ago by velyan
New activity in ggml-org/gguf-my-repo about 2 months ago

Update app.py
#132 opened about 2 months ago by velyan
updated a Space about 2 months ago
liked a Space about 2 months ago
upvoted an article 3 months ago
Llama can now see and run on your device - welcome Llama 3.2
upvoted an article 5 months ago
Releasing Swift Transformers: Run On-Device LLMs in Apple Devices
reacted to pcuenq's post with 🔥 5 months ago
OpenELM in Core ML

Apple recently released a set of efficient LLMs with sizes ranging from 270M to 3B parameters. Their quality, according to benchmarks, is similar to OLMo models of comparable size, but they required half the pre-training tokens because they use layer-wise scaling, where the number of attention heads increases in deeper layers.
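To see the layer-wise scaling in an actual checkpoint, one can inspect the per-layer head counts in the model config. A minimal sketch, assuming the apple/OpenELM-270M repo and that the remote config exposes head counts under a field such as num_query_heads (check the checkpoint's config.json for the exact key):

```python
# Sketch: inspect per-layer attention-head counts in an OpenELM checkpoint.
# The field name num_query_heads is an assumption about the remote config;
# consult the checkpoint's config.json if the attribute differs.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("apple/OpenELM-270M", trust_remote_code=True)
heads_per_layer = getattr(config, "num_query_heads", None)
print(heads_per_layer)  # expected: a per-layer list that grows with depth
```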

I converted these models to Core ML, for use on Apple Silicon, using this script: https://gist.github.com/pcuenca/23cd08443460bc90854e2a6f0f575084. The converted models were uploaded to this community on the Hub for anyone who wants to integrate them into their apps: corenet-community/openelm-core-ml-6630c6b19268a5d878cfd194

The conversion was done with the following parameters:
- Precision: float32.
- Sequence length: fixed to 128.
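For readers who want to reproduce something similar, here is a minimal sketch of that kind of conversion with coremltools; it is not the linked gist, and the repo id, the tracing wrapper, and the output handling are assumptions:

```python
# Rough sketch of a float32 Core ML conversion with a fixed sequence length.
# This is not the linked gist; repo id and wrapper are illustrative assumptions.
import numpy as np
import torch
import coremltools as ct
from transformers import AutoModelForCausalLM

model_id = "apple/OpenELM-270M"  # smallest of the released sizes
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, torch_dtype=torch.float32
).eval()

seq_len = 128  # fixed sequence length, as in the post
example_ids = torch.zeros((1, seq_len), dtype=torch.int64)

class LogitsOnly(torch.nn.Module):
    """Wrap the model so tracing sees a single tensor output (the logits)."""
    def __init__(self, m):
        super().__init__()
        self.m = m
    def forward(self, input_ids):
        return self.m(input_ids).logits

traced = torch.jit.trace(LogitsOnly(model), example_ids)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input_ids", shape=example_ids.shape, dtype=np.int32)],
    compute_precision=ct.precision.FLOAT32,  # float16 still hits the precision issue noted below
    minimum_deployment_target=ct.target.macOS14,
)
mlmodel.save("OpenELM-270M-float32.mlpackage")
```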

With swift-transformers (https://github.com/huggingface/swift-transformers), I'm getting about 56 tok/s with the 270M model on my M1 Max, and about 6.5 tok/s with the largest 3B model. These speeds could be improved by converting to float16. However, there's some precision loss somewhere and generation doesn't work in float16 mode yet. I'm looking into this and will keep you posted! Or take a look at this issue if you'd like to help: https://github.com/huggingface/swift-transformers/issues/95
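One hedged way to narrow down where the float16 loss shows up is to convert the same traced model at both precisions and compare the logits on an identical input. The package file names below are placeholders, and the output key is looked up at runtime because conversion assigns it automatically:

```python
# Sketch: compare float32 and float16 Core ML outputs on the same input.
# File names are placeholders; the single output key is discovered at runtime.
import numpy as np
import coremltools as ct

fp32 = ct.models.MLModel("OpenELM-270M-float32.mlpackage")
fp16 = ct.models.MLModel("OpenELM-270M-float16.mlpackage")

ids = np.zeros((1, 128), dtype=np.int32)  # a padded prompt at the fixed length

out32 = fp32.predict({"input_ids": ids})
out16 = fp16.predict({"input_ids": ids})

key = next(iter(out32))  # the converter names the output automatically
print("max abs logit difference:", np.abs(out32[key] - out16[key]).max())
```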

I'm also looking at optimizing inference using an experimental kv cache in swift-transformers. It's a bit tricky because the layers have varying numbers of attention heads, but I'm curious to see how much this feature can accelerate generation in this model family :)
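To illustrate why the varying head counts make the cache awkward: each layer needs its own buffer shape, so the cache cannot be one uniform tensor across layers. This is a plain-Python illustration, not swift-transformers code, and all dimensions are made-up placeholders.

```python
# Sketch: a per-layer kv cache when the number of attention heads grows with depth.
# Head counts and dimensions are placeholder values, not the real OpenELM config.
import numpy as np

batch_size = 1
head_dim = 64
max_seq_len = 128
kv_heads_per_layer = [3, 3, 4, 4, 5, 5]  # grows with depth (illustrative)

# One (keys, values) pair per layer, each sized to that layer's head count.
kv_cache = [
    (
        np.zeros((batch_size, heads, max_seq_len, head_dim), dtype=np.float32),
        np.zeros((batch_size, heads, max_seq_len, head_dim), dtype=np.float32),
    )
    for heads in kv_heads_per_layer
]

for layer, (keys, values) in enumerate(kv_cache):
    print(f"layer {layer}: keys {keys.shape}, values {values.shape}")
```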

Regarding the instruct fine-tuned models, I don't know which chat template was used. The models use the Llama 2 tokenizer, but neither the Llama 2 chat template nor the default Alignment Handbook template used for training is recognized. Any ideas on this are welcome!
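Since the post says the models use the Llama 2 tokenizer, one quick check is whether that tokenizer bundles a chat template at all. A minimal sketch, assuming the meta-llama/Llama-2-7b-hf repo (which is gated, so access must be granted first):

```python
# Sketch: check whether the Llama 2 tokenizer ships a chat template.
# The repo id is an assumption; access to meta-llama checkpoints is gated.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
print(tok.chat_template)  # None would mean a template has to be supplied explicitly
```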