reshinthadith committed on
Commit
b2ce6ac
1 Parent(s): 953b167

Create README.md

---
datasets:
- bigcode/starcoderdata
language:
- code
tags:
- causal-lm
license: cc-by-sa-4.0
---
# `StableCode-Completion-Alpha-3B`

## Model Description

`StableCode-Completion-Alpha-3B` is a 3 billion parameter decoder-only code completion model pre-trained on a diverse set of programming languages that topped the Stack Overflow Developer Survey.

## Usage

The model is intended to perform single- and multi-line code completion from a long context window of up to 16k tokens.
Get started generating code with `StableCode-Completion-Alpha-3B` by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and model weights in their native precision
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablecode-completion-alpha-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)
model.cuda()

# Complete a prompt with low-temperature sampling
inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
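
Note that `temperature=0.2` with `do_sample=True` samples close to the model's most likely continuation while still allowing some variation; raise `max_new_tokens` for longer completions.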

## Model Details

* **Developed by**: Code.AI Team @ [Stability AI](https://stability.ai/)
* **Model type**: `StableCode-Completion-Alpha-3B` models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English, Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Model checkpoints are licensed under the Creative Commons license ([CC BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/)). Under this license, you must give [credit](https://creativecommons.org/licenses/by/4.0/#) to Stability AI, provide a link to the license, and [indicate if changes were made](https://creativecommons.org/licenses/by/4.0/#). You may do so in any reasonable manner, but not in any way that suggests that Stability AI endorses you or your use.
* **Contact**: For questions and comments about the model, please email `[email protected]`

### Model Architecture

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560        | 32     | 32    | 16384           |

* **Decoder Layer**: Parallel Attention and MLP residuals with a single input LayerNorm ([Wang & Komatsuzaki, 2021](https://github.com/kingoflolz/mesh-transformer-jax/tree/master)); see the sketch below
* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864))
* **Bias**: LayerNorm bias terms only

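To make the parallel residual layout concrete, here is a minimal PyTorch sketch of one decoder layer (an illustration only, not the released GPT-NeoX implementation; rotary position embeddings and the causal attention mask are omitted for brevity, and the 4x MLP expansion factor is an assumption):

```python
import torch
import torch.nn as nn

class ParallelDecoderLayer(nn.Module):
    """Parallel attention/MLP residual block with a single input LayerNorm:
    y = x + Attn(LN(x)) + MLP(LN(x)), in the GPT-J/NeoX style."""

    def __init__(self, hidden_size: int = 2560, num_heads: int = 32):
        super().__init__()
        # Only the LayerNorm carries bias terms, matching the card above.
        self.norm = nn.LayerNorm(hidden_size)
        self.attn = nn.MultiheadAttention(
            hidden_size, num_heads, bias=False, batch_first=True
        )
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, 4 * hidden_size, bias=False),  # assumed 4x expansion
            nn.GELU(),
            nn.Linear(4 * hidden_size, hidden_size, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)  # one shared LayerNorm feeds both branches
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        # Attention and MLP outputs are summed into the residual in
        # parallel, rather than applied sequentially.
        return x + attn_out + self.mlp(h)
```

Compared with the sequential layout `x + MLP(LN(x + Attn(LN(x))))`, the parallel form lets the attention and MLP branches be computed concurrently, which improves training throughput at scale.
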
## Training

`StableCode-Completion-Alpha-3B` is pre-trained using a multi-stage context length extension schedule, following similar work ([Nijkamp et al., 2023](https://blog.salesforceairesearch.com/xgen/)): first pre-training at a context length of 4096 for 300B tokens, then fine-tuning at a context length of 16384 for another 200B tokens.

### Training Dataset

The first pre-training stage relies on 300B tokens sourced from the top programming languages occurring in the Stack Overflow Developer Survey. We then fine-tune on a longer-context augmentation of the `starcoder-data` dataset.

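For reference, a language subset of the StarCoder data can be streamed from the Hugging Face Hub roughly as follows (a sketch, not the actual training pipeline; the `data_dir` value is one example subset, and access may require accepting the dataset's terms on the Hub):

```python
from datasets import load_dataset

# Stream the Python subset of starcoderdata without downloading it all;
# other language subsets live in sibling data_dir folders.
ds = load_dataset(
    "bigcode/starcoderdata",
    data_dir="python",
    split="train",
    streaming=True,
)
print(next(iter(ds))["content"][:200])  # peek at one source file
```
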
### Training Procedure

The model is pre-trained on the dataset mixes mentioned above in mixed precision (BF16), optimized with AdamW, and trained using the NeoX tokenizer with a vocabulary size of 49k.

* **Software**: We use a fork of gpt-neox ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)) and train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), relying on flash-attention as well as rotary embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).

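For intuition, the precision and optimizer choices correspond to something like the following toy sketch (placeholder model, learning rate, and data; the real run uses the gpt-neox harness with ZeRO-1 and tensor parallelism rather than a plain loop):

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 256).cuda()  # stand-in for the real 3B model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr is illustrative

for step in range(10):
    batch = torch.randn(8, 256, device="cuda")
    # BF16 mixed precision: the forward pass runs in bfloat16 under autocast
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(batch).pow(2).mean()  # stand-in loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```
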
## Use and Limitations

### Intended Use

The model is intended for single- and multi-line code completion over long contexts (up to 16,384 tokens) in the programming languages it was trained on.

### Limitations and bias

As a base model trained on large public code corpora, the model may produce completions that are incorrect, insecure, or inefficient, and may reflect biases present in its training data. Generated code should be reviewed and tested before use.