wdevazelhes committed on
Commit 6ee25ba
1 Parent(s): 677a2de

push model card

Files changed (1)
  1. README.md +48 -103
README.md CHANGED
@@ -1,6 +1,7 @@
 ---
 language:
 - en
 - es
 - pt
 tags:
@@ -11,127 +12,61 @@ license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
 ---
 
 
 
-# Table of Contents
 
-0. [TL;DR](#TL;DR)
-1. [Model Details](#model-details)
-2. [Usage](#usage)
-3. [Training Details](#training-details)
-4. [Evaluation](#evaluation)
 
 
-# TL;DR
 
-# Model Details
 
-⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.**
-
-## Model Description
-
-- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
-- **Model type:** Causal decoder-only
-- **Architecture:** Transformer-base
-- **Language(s) (NLP):** Mainly English
-- **License:** TII Falcon-LLM License 2.0
-
-<br>
-
-# Usage
-
-Find below some example scripts on how to use the model in `transformers` (Make sure to have the latest transformers, or the one built from source):
-
-## Using the Pytorch model with 🤗 transformers
-
-### Running the model on a CPU
-
-<details>
-<summary> Click to expand </summary>
-
-```python
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
-model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base")
-
-input_text = "Question: How many hours in one day? Answer: "
-input_ids = tokenizer(input_text, return_tensors="pt").input_ids
-
-outputs = model.generate(input_ids)
-print(tokenizer.decode(outputs[0]))
-```
-
-</details>
-
-### Running the model on a GPU
-
-<details>
-<summary> Click to expand </summary>
-
-```python
-# pip install accelerate
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
-model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base", device_map="auto")
-
-input_text = "Question: How many hours in one day? Answer: "
-input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
-
-outputs = model.generate(input_ids)
-print(tokenizer.decode(outputs[0]))
-```
-
-</details>
-
-### Running the model on a GPU using `torch.compile`
 
 <details>
 <summary> Click to expand </summary>
 
 ```python
 import torch
-from transformers import AutoTokenizer, AutoModelForCausalLM
-
-tokenizer = AutoTokenizer.from_pretrained("tiiuae/Falcon3-7B-Base")
-model = AutoModelForCausalLM.from_pretrained("tiiuae/Falcon3-7B-Base", torch_dtype=torch.bfloat16).to(0)
-
-model = torch.compile(model)
-
-input_text = "Question: How many hours in one day? Answer: "
-input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
-
-outputs = model.generate(input_ids)
-print(tokenizer.decode(outputs[0]))
 ```
 
 </details>
 
 
-# Training Details
-
-## Training Data
-
-Falcon3-7B is trained on 15 Gigatokens of datasets comprising of web, code, STEM, high quality and mutlilingual data.
-
-## Training Procedure
-
-Falcon3-7B is trained on 256 H100 nodes (world size 2048).
 
-### Training Hyperparameters
 
-| **Hyperparameter** | **Value**  | **Comment**                                                   |
-|--------------------|------------|---------------------------------------------------------------|
-| Precision          | `bfloat16` |                                                               |
-| Optimizer          | AdamW      |                                                               |
-| Max learning rate  | 6e-4       | Following a WSD (warmup-stable-decay) learning rate scheduler |
-| Weight decay       | 1e-1       |                                                               |
-| z-loss             | 1e-4       |                                                               |
-| Batch size         | Variable   | Batch size was gradually increased during the training        |
 
-# Evaluation
 <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
 <colgroup>
 <col style="width: 10%;">
@@ -251,7 +186,17 @@ Falcon3-7B is trained on 256 H100 nodes (world size 2048).
 </tbody>
 </table>
 
-
-
-
-# Citation
 ---
 language:
 - en
+- fr
 - es
 - pt
 tags:
 
 ---
 
 
+# Falcon3-1B-Base
 
+The **Falcon3** family of Open Foundation Models is a set of pretrained and instruct LLMs ranging from 1B to 10B parameters.
 
+This repository contains **Falcon3-1B-Base**. It achieves strong results on reasoning, language understanding, instruction following, code and mathematics tasks.
+Falcon3-1B-Base supports 4 languages (English, French, Spanish, Portuguese) and a context length of up to 32K.
+It was pruned in terms of depth, width, number of heads, and embedding channels from a larger 3B Falcon model, and was efficiently trained on only 80 GT (gigatokens) using a knowledge distillation objective.
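The card does not spell out the exact distillation objective, but a generic soft-label knowledge-distillation loss illustrates the idea: the pruned 1B student is trained to match the larger teacher's temperature-softened next-token distribution. A minimal pure-Python sketch; the temperature value and all logit values below are illustrative, not from the card:

```python
import math

# Generic soft-label knowledge distillation (illustrative sketch only;
# the actual Falcon3 training objective is not detailed in this card).

def softmax(logits, temperature=1.0):
    # Temperature-softened probability distribution over the vocabulary.
    exps = [math.exp(l / temperature) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # so gradients keep a consistent magnitude across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2

teacher = [2.0, 1.0, 0.1, -1.0]  # toy next-token logits from the teacher
student = [1.5, 1.2, 0.0, -0.5]  # toy logits from the pruned student
print(kd_loss(student, teacher))  # positive; 0.0 when distributions match
```

The loss is zero exactly when the student reproduces the teacher's distribution, which is what makes "healing" a pruned model with far fewer tokens feasible.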
 
 
 
+⚠️ **This is a raw, pretrained model, which should be further finetuned using SFT, RLHF, continued pretraining, etc. for most use cases.**
 
+## Model Details
+- Architecture
+    - Transformer-based causal decoder-only architecture
+    - 22 decoder blocks
+    - Grouped Query Attention (GQA) for faster inference: 8 query heads and 4 key-value heads
+    - Wider head dimension: 256
+    - High RoPE value to support long context understanding: 1000042
+    - Uses SwiGLU and RMSNorm
+    - 32K context length
+    - 131K vocab size
+- Pruned and healed using larger Falcon models (3B and 7B respectively) on only 80 gigatokens of data comprising web, code, STEM, high-quality and multilingual sources, using 256 H100 GPUs
+- Supports EN, FR, ES, PT
+- Developed by [Technology Innovation Institute](https://www.tii.ae)
+- License: TII Falcon-LLM License 2.0
+- Model Release Date: December 2024
 
 
41
 
42
+ ## Getting started
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
43
 
44
  <details>
45
  <summary> Click to expand </summary>
46
 
47
  ```python
48
  import torch
49
+ from transformers import pipeline
50
+
51
+ pipe = pipeline(
52
+ "text-generation",
53
+ model="tiiuae/Falcon3-1B-Base",
54
+ torch_dtype=torch.bfloat16,
55
+ device_map="auto"
56
+ )
57
+ response = pipe("Question: How many hours in one day? Answer: ")
58
+ print(response[0]['generated_text'])
 
 
59
  ```
60
 
61
  </details>
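The high RoPE base listed under Model Details (1000042) is part of what makes the 32K context workable: a larger base gives the low-frequency rotary channels wavelengths longer than the context window, so distant positions stay distinguishable. A sketch of that effect, assuming the standard RoPE formulation with the head dimension of 256 from the list above (the 10000 comparison base is the common default, not something stated in this card):

```python
import math

# Per-channel RoPE wavelength: theta_i = base^(-2i/d), wavelength_i = 2*pi / theta_i.
def rope_wavelengths(base, head_dim):
    return [2 * math.pi * base ** (2 * i / head_dim) for i in range(head_dim // 2)]

ctx = 32 * 1024  # 32K context length
for base in (10_000.0, 1_000_042.0):
    wl = rope_wavelengths(base, 256)
    # Channels whose full rotation period exceeds the context never wrap,
    # so they encode absolute long-range position unambiguously.
    covered = sum(w >= ctx for w in wl)
    print(f"base={base:>9.0f}: {covered}/{len(wl)} channels never wrap within {ctx} tokens")
```

With the default base only a handful of channels have wavelengths beyond 32K; the large base pushes far more channels into that regime.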
 
+<br>
 
+## Benchmarks
+We report our internal pipeline benchmark results in the table below:
 
 
 
 <table border="1" style="width: 100%; text-align: center; border-collapse: collapse;">
 <colgroup>
 <col style="width: 10%;">
 
 </tbody>
 </table>
 
+## Technical Report
+Coming soon.
 
+## Citation
+If the Falcon3 family of models was helpful to your work, feel free to cite us:
 
+```bibtex
+@misc{Falcon3,
+    title = {The Falcon 3 family of Open Models},
+    author = {TII Team},
+    month = {December},
+    year = {2024}
+}
+```