A very tiny 33.5M-parameter Llama3 model, trained for 10 hours on a MacBook Pro with an M3 Max.

The complete training code is available at https://github.com/frost-beta/train-japanese-llama3-js.
Note: this model is not deployed by any inference provider, because the HF Inference API does not support text-generation models from the `mlx` library.