---
language:
- en
- sw
- zu
- xh
- ha
- yo
pipeline_tag: text-generation
tags:
- nlp
- InkubaLM
- africanLLM
- africa
- llm
datasets:
- lelapa/Inkuba-Mono
license: cc-by-nc-4.0
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/InkubaLM-0.4B-GGUF
This is a quantized version of [lelapa/InkubaLM-0.4B](https://huggingface.co/lelapa/InkubaLM-0.4B) created using llama.cpp.
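
The GGUF weights in this repository can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the GGUF file name is a placeholder and should be replaced with the quantization variant you download from this repository.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# The file name below is a placeholder; point it at the GGUF file downloaded from this repo.
llm = Llama(model_path="./InkubaLM-0.4B.Q4_K_M.gguf")

# Generate a short completion from a prompt
output = llm("Today I planned to", max_tokens=60)
print(output["choices"][0]["text"])
```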

# Original Model Card

# InkubaLM-0.4B: Small language model for low-resource African Languages

![InkubaLM](InkubaLM.png)

## Model Details
InkubaLM was trained from scratch on 1.9 billion tokens of data in five African languages, along with English and French data, for a total of 2.4 billion tokens.
Following a model architecture similar to MobileLLM, we trained InkubaLM with 0.4 billion parameters and a vocabulary size of 61,788.
For detailed information on training, benchmarks, and performance, please refer to our full [blog post](https://medium.com/@lelapa_ai/inkubalm-a-small-language-model-for-low-resource-african-languages-dc9793842dec).

### Model Description

- **Developed by:** [Lelapa AI](https://lelapa.ai/) - Fundamental Research Team.
- **Model type:** Small Language Model (SLM) for five African languages, built using the architecture design of LLaMA-7B.
- **Language(s) (NLP):** isiZulu, Yoruba, Swahili, isiXhosa, Hausa, English, and French.
- **License:** CC BY-NC 4.0.

### Model Sources

- **Repository:** TBD
- **Paper:** [InkubaLM](https://arxiv.org/pdf/2408.17024)

## How to Get Started with the Model

Use the code below to get started with the model.

```bash
pip install transformers
```

### Running the model on CPU, GPU, or multiple GPUs

#### Running the model on CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)

text = "Today I planned to"
inputs = tokenizer(text, return_tensors="pt")
input_ids = inputs.input_ids

# Create an attention mask
attention_mask = inputs.attention_mask

# Generate outputs using the attention mask
outputs = model.generate(input_ids, attention_mask=attention_mask, max_length=60, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

#### Using full precision
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)

# Move the model to the GPU
model.to('cuda')
text = "Today I planned to "
input_ids = tokenizer(text, return_tensors="pt").to('cuda').input_ids
outputs = model.generate(input_ids, max_length=1000, repetition_penalty=1.2, pad_token_id=tokenizer.eos_token_id)
# Decode only the newly generated tokens, dropping the prompt and the final token
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```

#### Using torch.bfloat16
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "lelapa/InkubaLM-0.4B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Load the model in bfloat16 and let accelerate handle device placement
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer.encode("Today I planned to ", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

#### Using quantized versions via bitsandbytes

```bash
pip install bitsandbytes accelerate
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_8bit=True)  # to use 4-bit, pass `load_in_4bit=True` instead
checkpoint = "lelapa/InkubaLM-0.4B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, quantization_config=quantization_config, trust_remote_code=True)
inputs = tokenizer.encode("Today I planned to ", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```

## Training Details

### Training Data

- For training, we used the [Inkuba-Mono](https://huggingface.co/datasets/lelapa/Inkuba-Mono) dataset.
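
For a quick look at the corpus, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the split name is an assumption and should be checked against the configurations the Inkuba-Mono dataset card actually provides.

```python
from datasets import load_dataset

# The split name is an assumption; check the dataset card for the available configurations.
inkuba_mono = load_dataset("lelapa/Inkuba-Mono", split="train")

print(inkuba_mono)      # number of rows and column names
print(inkuba_mono[0])   # one raw training example
```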

#### Training Hyperparameters

| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 0.422B |
| Hidden Size | 2048 |
| Intermediate Size (MLPs) | 5632 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 8 |
| RMSNorm ɛ | 1e-5 |
| Max Seq Length | 2048 |
| Vocab Size | 61788 |
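
As a quick sanity check, the values in this table can be compared against the loaded checkpoint. The sketch below assumes LLaMA-style attribute names on the config, which may differ in the custom InkubaLM configuration, so missing attributes are printed as "n/a".

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lelapa/InkubaLM-0.4B", trust_remote_code=True)

# Total parameter count: should come out to roughly 0.422B
n_params = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {n_params / 1e9:.3f}B")

# LLaMA-style attribute names are an assumption for this custom config
cfg = model.config
for name in ("hidden_size", "intermediate_size", "num_attention_heads", "num_hidden_layers", "vocab_size"):
    print(name, getattr(cfg, name, "n/a"))
```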

## Limitations
The InkubaLM model has been trained on multilingual datasets but still has some limitations. It is capable of understanding and generating content in five African languages: Swahili, Yoruba, Hausa, isiZulu, and isiXhosa, as well as English and French. While it can generate text on various topics, the resulting content may not always be entirely accurate, logically consistent, or free from biases found in the training data. Additionally, the model may sometimes switch between languages when generating text. Nonetheless, this model is intended to be a foundational tool to aid research in African languages.

## Ethical Considerations and Risks
InkubaLM is a small LM developed for five African languages. The model has been evaluated only on sentiment analysis, machine translation, AfriMMLU, and AfriXNLI tasks, and it does not yet cover all possible evaluation scenarios. As with other language models, it is impossible to predict all of InkubaLM's potential outputs in advance, and in some cases the model may produce inaccurate, biased, or otherwise objectionable responses. Therefore, before using the model in any application, users should conduct safety testing and tuning tailored to their intended use.

## Citation

```
@article{tonja2024inkubalm,
  title={InkubaLM: A small language model for low-resource African languages},
  author={Tonja, Atnafu Lambebo and Dossou, Bonaventure FP and Ojo, Jessica and Rajab, Jenalea and Thior, Fadel and Wairagala, Eric Peter and Anuoluwapo, Aremu and Moiloa, Pelonomi and Abbott, Jade and Marivate, Vukosi and others},
  journal={arXiv preprint arXiv:2408.17024},
  year={2024}
}
```

## Model Card Authors

[Lelapa AI](https://lelapa.ai/) - Fundamental Research Team

## Model Card Contact

[Lelapa AI](https://lelapa.ai/)