Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ datasets:
 
 
 ## Introduction
-AMD-Llama-135m is a language model trained on AMD MI250
+AMD-Llama-135m is a language model trained on AMD Instinct MI250 accelerators. Based on LLama2 model architecture, this model can be smoothly loaded as LlamaForCausalLM with huggingface transformers. Furthermore, we use the same tokenizer as LLama2, enabling it to be a draft model of speculative decoding for LLama2 and CodeLlama.
 
 ## Model Details
 
@@ -99,7 +99,7 @@ Embedding layers and Linear layers of attention module are randomly initialized
 We use python split of [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) dataset to finetune our 135m pretrained model, 20B training tokens. Originally, StarCoder contains 783GB of code in 86 programming languages and includes GitHub Issues, Jupyter notebooks and GitHub commits, which is approximately 250 Billion tokens. We extract the python split of StarCoder to finetune our 135m pretrained model.
 
 ### Code Finetuning Detail
-We take the 135m pretrained model as base model and further finetune on python split of StarCoder datasets for
+We take the 135m pretrained model as base model and further finetune on python split of StarCoder datasets for 1 epoch with batch size of 320.
 
 | Finetuning config | value |
 | ---------------------- | ------ |
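The rewritten introduction claims the checkpoint loads directly as `LlamaForCausalLM` with huggingface transformers. A minimal sketch of that load path, assuming the hub id `amd/AMD-Llama-135m` (the hunk names only the class, not the id):

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Hub id assumed for illustration; the diff only names the class.
model = LlamaForCausalLM.from_pretrained("amd/AMD-Llama-135m")
tokenizer = AutoTokenizer.from_pretrained("amd/AMD-Llama-135m")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```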
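The shared-tokenizer point is what enables the draft-model use case: draft and target must tokenize identically for transformers' assisted generation. A hedged sketch, with the LLama2 7B target id assumed for illustration:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

# Draft and target share the LLama2 tokenizer, which is the requirement
# for assisted (speculative) generation. Target id assumed for illustration.
target = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
draft = LlamaForCausalLM.from_pretrained("amd/AMD-Llama-135m")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Speculative decoding speeds up generation by", return_tensors="pt")
# The draft proposes tokens cheaply; the target verifies them in one pass.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```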
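The second hunk pins the finetuning recipe to the python split of StarCoder. A sketch of pulling just that split with `datasets`, assuming the per-language `data_dir` layout documented on the StarCoderData card:

```python
from datasets import load_dataset

# Languages live in subdirectories of bigcode/starcoderdata; "python"
# selects the split used for finetuning. Streaming avoids fetching
# the full 783GB corpus.
ds = load_dataset("bigcode/starcoderdata", data_dir="python",
                  split="train", streaming=True)

for example in ds.take(1):
    # "content" is assumed to be the source-code field on this dataset.
    print(example["content"][:200])
```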