---
license: apache-2.0
datasets:
- PrimeIntellect/fineweb-edu
- PrimeIntellect/fineweb
- PrimeIntellect/StackV1-popular
- mlfoundations/dclm-baseline-1.0-parquet
- open-web-math/open-web-math
language:
- en
pipeline_tag: text-generation
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/INTELLECT-1-GGUF
This is a quantized version of [PrimeIntellect/INTELLECT-1](https://huggingface.co/PrimeIntellect/INTELLECT-1) created using llama.cpp.

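To run the GGUF files directly, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The filename pattern is a placeholder for whichever quantization level you download from this repo, and the generation settings are illustrative:

```python
from llama_cpp import Llama

# Placeholder glob: substitute the quantization level you actually want
# from QuantFactory/INTELLECT-1-GGUF (e.g. a Q4_K_M file).
llm = Llama.from_pretrained(
    repo_id="QuantFactory/INTELLECT-1-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,  # matches the model's 8192-token context length
)

# INTELLECT-1 is a base model, so use plain text completion rather than chat.
out = llm("The Metamorphosis of Prime Intellect is", max_tokens=64)
print(out["choices"][0]["text"])
```
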
# Original Model Card

# INTELLECT-1

## **Model Overview**
**INTELLECT-1** is the first collaboratively trained 10-billion-parameter language model, trained from scratch on 1 trillion tokens of English text and code.

![Intellect 1 training visual](intellect-1-map.png)

This is a base model. Please use [INTELLECT-1-Instruct](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct) for chat use cases.

**INTELLECT-1** was trained on up to 14 concurrent nodes distributed across 3 continents, with compute contributed by 30 independent community contributors.
The training code uses the [prime framework](https://github.com/PrimeIntellect-ai/prime), a scalable distributed training framework designed for fault-tolerant, dynamically scaling, high-performance training on unreliable, globally distributed workers.
The key abstraction that enables dynamic scaling is the `ElasticDeviceMesh`, which manages dynamic global process groups for fault-tolerant communication across the internet and local process groups for communication within a node.
The model was trained with the [DiLoCo](https://arxiv.org/abs/2311.08105) algorithm using 100 inner steps. The global all-reduce was performed with custom int8 all-reduce kernels to shrink the communication payload, reducing communication overhead by a factor of 400x.

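A rough sketch of what one such DiLoCo round looks like is shown below. This is illustrative only, not the actual `prime` implementation: it assumes an HF-style causal LM, all-reduces the pseudo-gradient in full precision rather than with the custom int8 kernels, and omits the fault-tolerance and elastic-membership machinery.

```python
import torch
import torch.distributed as dist

def diloco_round(model, inner_opt, outer_opt, global_params, data_iter, inner_steps=100):
    """Illustrative DiLoCo round: many local steps, then one global sync."""
    # 1) Local phase: ordinary training steps on this worker's data shard.
    for _ in range(inner_steps):
        batch = next(data_iter)                      # token ids, shape (B, T)
        loss = model(batch, labels=batch).loss       # assumes an HF-style causal LM
        loss.backward()
        inner_opt.step()
        inner_opt.zero_grad()

    # 2) Global phase: average the pseudo-gradients (drift from the global weights).
    world_size = dist.get_world_size()
    with torch.no_grad():
        for p, g in zip(model.parameters(), global_params):
            pseudo_grad = g - p                      # how far this worker drifted
            dist.all_reduce(pseudo_grad, op=dist.ReduceOp.SUM)
            pseudo_grad /= world_size                # average across workers
            g.grad = pseudo_grad                     # feed it to the outer optimizer

    # 3) Outer optimizer step, then resync every worker to the new global weights.
    outer_opt.step()
    outer_opt.zero_grad()
    with torch.no_grad():
        for p, g in zip(model.parameters(), global_params):
            p.copy_(g)
```
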
For more detailed technical insights, please refer to our [technical paper](https://github.com/PrimeIntellect-ai/prime).

**Note: You must add a BOS token at the beginning of each sample. Performance may be impacted otherwise.**

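A small sketch for checking that your tokenization path actually inserts the BOS token, and for prepending it manually when encoding with special tokens disabled (the exact special tokens depend on the tokenizer config shipped with the checkpoint):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1")

# With default settings, encode() adds the special tokens defined by the tokenizer.
ids = tokenizer.encode("What is prime intellect?")
print(tokenizer.convert_ids_to_tokens(ids)[0])  # expected to be the BOS token

# If you encode with add_special_tokens=False, prepend the BOS id yourself.
raw_ids = tokenizer.encode("What is prime intellect?", add_special_tokens=False)
ids_with_bos = [tokenizer.bos_token_id] + raw_ids
```
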
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("PrimeIntellect/INTELLECT-1")
tokenizer = AutoTokenizer.from_pretrained("PrimeIntellect/INTELLECT-1")

input_text = "What is the Metamorphosis of Prime Intellect about?"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=50, num_return_sequences=1)
output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True)

print(output_text)
```

### Example text generation pipeline
```python
import torch
from transformers import pipeline
torch.set_default_device("cuda")

pipe = pipeline("text-generation", model="PrimeIntellect/INTELLECT-1")
print(pipe("What is prime intellect?"))
```

## **Model Details**
- **Compute Contributors**: Prime Intellect, Arcee AI, kotaro, skre_0, marlo, rodeo, Herb, Olas, superchillen, Hugging Face, mev_pete, 0xfr_, dj, primeprimeint1234, Marco Giglio, realtek, Hyperbolic, hecataeus, NWO, Virtual Machine, droll, SemiAnalysis, _waiting__, toptickcrypto, sto, Johannes, washout_segment_0b, klee
- **Release Date**: 29 Nov 2024
- **Model License**: Apache 2.0

## **Technical Specifications**
| **Parameter** | **Value** |
|----------------------|------------------------|
| Parameter Size | 10B |
| Number of Layers | 42 |
| Number of Attention Heads | 32 |
| Hidden Size | 4096 |
| Context Length | 8192 |
| Vocabulary Size | 128256 |

**Training Details**:
- **Dataset**: 55% fineweb-edu, 10% fineweb, 20% Stack V1, 10% dclm-baseline, 5% open-web-math
- **Tokens**: 1 trillion
- **Optimizer**: DiLoCo/LocalSGD, with AdamW as the inner optimizer and Nesterov SGD as the outer optimizer (see the construction sketch after this list)

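For reference, a minimal sketch of how such an inner/outer optimizer pair can be set up in plain PyTorch is shown below. The module and hyperparameters are illustrative placeholders, not the values used in the actual run; the outer optimizer is the one stepped on the averaged pseudo-gradients in the DiLoCo sketch above.

```python
import torch

model = torch.nn.Linear(4096, 4096)  # stand-in for the 10B-parameter model

# Inner optimizer: stepped by every worker on every local batch.
inner_opt = torch.optim.AdamW(model.parameters(), lr=4e-4, weight_decay=0.1)

# Global copy of the weights, updated once per DiLoCo round.
global_params = [p.detach().clone() for p in model.parameters()]

# Outer optimizer: Nesterov SGD over the global weights.
outer_opt = torch.optim.SGD(global_params, lr=0.7, momentum=0.9, nesterov=True)
```
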

**Performance on benchmarks**

Base Models:
| Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
|---|---|---|---|---|---|---|---|
| INTELLECT | 10B | 1T | 37.5 | 26.12 | 8.1 | 52.13 | 72.26 |
| MPT-7B | 7B | 1T | 26.8 | 25.67 | 8.3 | 46.67 | 77.41 |
| Falcon-7B | 7B | 1.5T | 26.2 | 23.66 | 4.9 | 47.61 | 78.23 |
| Pythia-12B | 12B | 300B | 26.5 | 24.33 | 4.09 | 40.61 | 68.83 |
| LLM360-Amber | 7B | 1.3T | 24.5 | 27.01 | 4.32 | 42.75 | 74.08 |
| LLaMA-7B | 7B | 1T | 35.1 | 23.21 | 9.7 | 50.43 | 78.19 |
| LLaMA-13B | 13B | 1T | 46.9 | 26.34 | 17.3 | 56.14 | 81.05 |
| LLaMA2-7B | 7B | 2T | 45.3 | 25.89 | 13.5 | 54.10 | 78.64 |
| LLaMA2-13B | 13B | 2T | 54.8 | 25.67 | 24.3 | 59.81 | 82.58 |

[Instruction-Tuned Models](https://huggingface.co/PrimeIntellect/INTELLECT-1-Instruct):
| Model | Size | Tokens | MMLU | GPQA | GSM8K | ARC-C | Hellaswag |
|---|---|---|---|---|---|---|---|
| INTELLECT-Instruct | 10B | 1T | 49.89 | 28.32 | 38.58 | 54.52 | 71.42 |
| MPT-7B-Chat | 7B | 1T | 36.29 | 26.79 | 8.26 | 51.02 | 75.88 |
| Falcon-7B-Instruct | 7B | 1.5T | 25.21 | 26.34 | 4.93 | 45.82 | 70.61 |
| LLM360-AmberChat | 7B | 1.4T | 36.02 | 27.23 | 6.14 | 43.94 | 73.94 |
| LLaMA2-7B-Chat | 7B | 2T | 47.20 | 28.57 | 23.96 | 53.33 | 78.69 |
| LLaMA2-13B-Chat | 13B | 2T | 53.51 | 28.35 | 37.15 | 59.73 | 82.47 |

## **Citations**
If you use this model in your research, please cite it as follows:
```bibtex
@article{jaghouar2024intellect,
  title={INTELLECT-1 Technical Report.},
  author={Jaghouar, Sami and Ong, Jack Min and Basra, Manveer and Obeid, Fares and Straube, Jannik and Keiblinger, Michael and Bakouch, Elie and Atkins, Lucas and Panahi, Maziyar and Goddard, Charles and Ryabinin, Max and Hagemann, Johannes},
  journal={arXiv preprint},
  year={2024}
}
```