Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


code-millenials-13b - GGUF
- Model creator: https://huggingface.co/budecosystem/
- Original model: https://huggingface.co/budecosystem/code-millenials-13b/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [code-millenials-13b.Q2_K.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q2_K.gguf) | Q2_K | 4.52GB |
| [code-millenials-13b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.IQ3_XS.gguf) | IQ3_XS | 4.99GB |
| [code-millenials-13b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.IQ3_S.gguf) | IQ3_S | 5.27GB |
| [code-millenials-13b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q3_K_S.gguf) | Q3_K_S | 5.27GB |
| [code-millenials-13b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.IQ3_M.gguf) | IQ3_M | 5.57GB |
| [code-millenials-13b.Q3_K.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q3_K.gguf) | Q3_K | 5.9GB |
| [code-millenials-13b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q3_K_M.gguf) | Q3_K_M | 5.9GB |
| [code-millenials-13b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q3_K_L.gguf) | Q3_K_L | 6.45GB |
| [code-millenials-13b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.IQ4_XS.gguf) | IQ4_XS | 6.54GB |
| [code-millenials-13b.Q4_0.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q4_0.gguf) | Q4_0 | 6.86GB |
| [code-millenials-13b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.IQ4_NL.gguf) | IQ4_NL | 6.9GB |
| [code-millenials-13b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q4_K_S.gguf) | Q4_K_S | 6.91GB |
| [code-millenials-13b.Q4_K.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q4_K.gguf) | Q4_K | 7.33GB |
| [code-millenials-13b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q4_K_M.gguf) | Q4_K_M | 7.33GB |
| [code-millenials-13b.Q4_1.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q4_1.gguf) | Q4_1 | 7.61GB |
| [code-millenials-13b.Q5_0.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q5_0.gguf) | Q5_0 | 8.36GB |
| [code-millenials-13b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q5_K_S.gguf) | Q5_K_S | 8.36GB |
| [code-millenials-13b.Q5_K.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q5_K.gguf) | Q5_K | 8.6GB |
| [code-millenials-13b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q5_K_M.gguf) | Q5_K_M | 8.6GB |
| [code-millenials-13b.Q5_1.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q5_1.gguf) | Q5_1 | 9.1GB |
| [code-millenials-13b.Q6_K.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q6_K.gguf) | Q6_K | 9.95GB |
| [code-millenials-13b.Q8_0.gguf](https://huggingface.co/RichardErkhov/budecosystem_-_code-millenials-13b-gguf/blob/main/code-millenials-13b.Q8_0.gguf) | Q8_0 | 12.88GB |
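
As a rough guide for choosing a file, the sizes above can be compared against available memory. A minimal sketch (the sizes are copied from the table; the helper name and the "largest file that fits" rule of thumb are ours, not part of this repo):

```python
# Hypothetical helper: pick the largest (generally highest-quality)
# quantization from the table above that fits a given memory budget.
# Sizes in GB are copied verbatim from the table.
QUANT_SIZES_GB = {
    "Q2_K": 4.52, "IQ3_XS": 4.99, "IQ3_S": 5.27, "Q3_K_S": 5.27,
    "IQ3_M": 5.57, "Q3_K_M": 5.9, "Q3_K_L": 6.45, "IQ4_XS": 6.54,
    "Q4_0": 6.86, "IQ4_NL": 6.9, "Q4_K_S": 6.91, "Q4_K_M": 7.33,
    "Q4_1": 7.61, "Q5_0": 8.36, "Q5_K_S": 8.36, "Q5_K_M": 8.6,
    "Q5_1": 9.1, "Q6_K": 9.95, "Q8_0": 12.88,
}

def best_quant(budget_gb):
    """Return the largest quant that fits the budget, or None."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(best_quant(8.0))  # → Q4_1 (7.61GB is the largest file under 8GB)
```

Note that actual memory use is higher than file size once the KV cache and runtime overhead are included, so leave headroom beyond the raw file size.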




Original model description:
---
license: llama2
library_name: transformers
tags:
- code
model-index:
- name: Code Millenials
  results:
  - task:
      type: text-generation
    dataset:
      type: openai_humaneval
      name: HumanEval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.7621
      verified: false
---


# Bud Code Millenials 13B

Welcome to our Code Model repository! This model is fine-tuned specifically for code generation tasks. The Bud Millenial Code Gen open-source models are currently state of the art (SOTA) for code generation, beating existing models across all sizes. We have achieved a HumanEval score of 80.48 @ pass@1, beating proprietary models like Gemini Ultra, Claude, GPT-3.5, etc. by a large margin, and on par with GPT-4 (HumanEval ~82, per the WizardCoder leaderboard). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval score of 88.2 and a context size of 168K. We will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]

### News 🔥🔥🔥

- [2024/01/09] We released **Code Millenials 3B**, which achieves **56.09 pass@1** on the [HumanEval Benchmark](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B**, which achieves **51.82 pass@1** on the [HumanEval Benchmark](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B**, which achieves **80.48 pass@1** on the [HumanEval Benchmark](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B**, which achieves **76.21 pass@1** on the [HumanEval Benchmark](https://github.com/openai/human-eval).


### HumanEval

<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>

For the Millenials models, the results above were produced with the eval script in the GitHub repo.

Note: The HumanEval values for other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities), etc.


### Models

|   Model | Checkpoint  | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
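
The pass@1 figures above follow the standard HumanEval evaluation. For reference, pass@k is usually computed with the unbiased estimator from the original HumanEval paper: with `n` samples per problem of which `c` pass the tests, the probability that at least one of `k` drawn samples passes is `1 - C(n-c, k) / C(n, k)`:

```python
from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimator: n samples generated, c of them correct."""
    if n - c < k:
        # Fewer failures than draws: at least one correct sample is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the fraction of correct samples, c/n.
print(pass_at_k(10, 5, 1))  # → 0.5
```

Averaging this quantity over all benchmark problems gives the reported pass@k score.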




### 🚀 Quick Start

Inference code using the pre-trained model from the Hugging Face model hub:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-13b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-13b")

template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction} ### Response:"""

# Replace the placeholder with your own instruction string.
instruction = "<Your code instruction here>"

prompt = template.format(instruction=instruction)

inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
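
Because `generate` echoes the prompt, the assistant's answer can be recovered by splitting the decoded text on the `### Response:` marker from the template. A small sketch (the helper name is ours; depending on settings you may also want `skip_special_tokens=True` when decoding):

```python
def extract_response(generated):
    """Return the text after the last '### Response:' marker, trimmed.

    If the marker is absent, rpartition yields ('', '', generated),
    so the whole string is returned trimmed.
    """
    _, _, response = generated.rpartition("### Response:")
    return response.strip()

decoded = "... ### Instruction: write hello world ### Response: print('hello world')"
print(extract_response(decoded))  # → print('hello world')
```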


## Training details

The model was trained on 8 A100 80GB GPUs for approximately 15 hours.

| Hyperparameters              | Value  |
| :----------------------------| :-----: |
| per_device_train_batch_size  | 2      |
| gradient_accumulation_steps  | 1      |
| epoch | 3 |
| steps | 34503 |
| learning_rate                | 2e-5   |
| lr scheduler type | cosine |
| warmup ratio | 0.1 |
| optimizer                    | adamw  |
| fp16                         | True   |
| GPU                          | 8 A100 80GB |
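
The hyperparameters above imply the following global batch size and per-epoch step count, assuming (our assumption, not stated in the table) that all 8 GPUs contribute to every optimizer step:

```python
# Sanity arithmetic on the training table above.
per_device_batch = 2
grad_accum = 1
n_gpus = 8
epochs = 3
total_steps = 34503

global_batch = per_device_batch * grad_accum * n_gpus  # samples per step
steps_per_epoch = total_steps // epochs
approx_samples = steps_per_epoch * global_batch        # rough dataset size

print(global_batch, steps_per_epoch, approx_samples)  # → 16 11501 184016
```

Under these assumptions the run covers roughly 184k training samples per epoch.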

### Important Note

- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.