TinyLlama-1.1B

https://github.com/jzhang38/TinyLlama

The TinyLlama project aims to pretrain a 1.1B Llama model on 3 trillion tokens. With proper optimization, this can be achieved in a span of "just" 90 days using 16 A100-40G GPUs πŸš€πŸš€. Training started on 2023-09-01.

We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged into many open-source projects built on Llama. Moreover, TinyLlama is compact, with only 1.1B parameters, which suits it to applications that demand a small compute and memory footprint.
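Because the architecture and tokenizer match Llama 2, any Llama-compatible loader works unchanged. A minimal sketch using Hugging Face `transformers` (the model ID is the one from this card; the prompt and generation settings are illustrative):

```python
# Minimal sketch: loading this TinyLlama checkpoint with transformers.
# Since it shares the Llama 2 architecture and tokenizer, the standard
# Auto* classes load it like any other Llama model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "TinyLlama/TinyLlama-1.1B-python-v0.1"


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Complete a code prompt with greedy decoding."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("def fibonacci(n):"))
```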

This Model

This is a code LM finetuned (or, more precisely, continually pretrained) from the 500B-token TinyLlama checkpoint on an additional 7B tokens of Python data from starcoderdata.

While the finetuning data is exclusively Python, the model retains its coding ability in many other languages, such as C and Java.

The model scores 14% on HumanEval.

It can be used as a draft model for speculative decoding of larger models, such as those in the CodeLlama family.
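A sketch of this draft-model setup via `transformers` assisted generation, where the small model proposes tokens and the large target verifies them in a single forward pass. The CodeLlama model ID and generation settings here are illustrative assumptions; note that assisted generation works best when draft and target tokenize text the same way, and CodeLlama's tokenizer extends the base Llama 2 vocabulary, so verify compatibility for your target checkpoint:

```python
# Sketch: TinyLlama as a draft model for speculative (assisted) decoding.
# The draft model proposes candidate tokens cheaply; the larger target
# model verifies them in one pass, accepting matching prefixes.
from transformers import AutoModelForCausalLM, AutoTokenizer

TARGET_ID = "codellama/CodeLlama-7b-hf"            # larger target (assumed example)
DRAFT_ID = "TinyLlama/TinyLlama-1.1B-python-v0.1"  # this card's model


def assisted_generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate from the target model, accelerated by the draft model."""
    tokenizer = AutoTokenizer.from_pretrained(TARGET_ID)
    target = AutoModelForCausalLM.from_pretrained(TARGET_ID)
    draft = AutoModelForCausalLM.from_pretrained(DRAFT_ID)
    inputs = tokenizer(prompt, return_tensors="pt")
    # Passing assistant_model enables assisted decoding in transformers.
    outputs = target.generate(
        **inputs, assistant_model=draft, max_new_tokens=max_new_tokens
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```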
