nickmalhotra committed
Commit 16da29a (1 parent: 7596e2e)
Update README.md
README.md CHANGED
@@ -18,7 +18,7 @@ model-index:
       value: 22.7
       name: normalized accuracy
     source:
-      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/
       name: Open LLM Leaderboard
   - task:
       type: text-generation

@@ -108,7 +108,7 @@ model-index:
 # Model Card for Indus

 <!-- Provide a quick summary of what the model is/does. [Optional] -->
-The model is a single shot fine tuned Instruct LLM in Hindi and dialects

@@ -157,26 +157,32 @@ The model is a single shot fine tuned Instruct LLM in Hindi and dialects
 ## Model Description

 <!-- Provide a longer summary of what this model is/does. -->
-

 - **Developed by:** Nikhil Malhotra, Nilesh Brahme, Satish Mishra, Vinay Sharma (Makers Lab, TechMahindra)
 - **Model type:** Foundational Language model
 - **Language(s) (NLP):** hin, bho, mai, doi
 - **License:** other
-- **Parent Model:** It is
-- **Resources for more information:**

 # Uses

 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

 ## Direct Use

 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-

@@ -184,7 +190,12 @@ The model is a single shot fine tuned Instruct LLM in Hindi and dialects

 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-

@@ -192,7 +203,7 @@ The model is a single shot fine tuned Instruct LLM in Hindi and dialects

 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
-

@@ -203,12 +214,14 @@ The model is a single shot fine tuned Instruct LLM in Hindi and dialects
 Significant research has explored bias and fairness issues with language models
 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
 Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

 ## Recommendations

 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-

       value: 22.7
       name: normalized accuracy
     source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=nickmalhotra/ProjectIndus
       name: Open LLM Leaderboard
   - task:
       type: text-generation

 # Model Card for Indus

 <!-- Provide a quick summary of what the model is/does. [Optional] -->
+The model is pretrained on Hindi and its dialects and has been instruct-tuned.

 ## Model Description

 <!-- Provide a longer summary of what this model is/does. -->
+The model is pretrained on Hindi and its dialects and has been instruct-tuned.

 - **Developed by:** Nikhil Malhotra, Nilesh Brahme, Satish Mishra, Vinay Sharma (Makers Lab, TechMahindra)
 - **Model type:** Foundational Language model
 - **Language(s) (NLP):** hin, bho, mai, doi
 - **License:** other
+- **Parent Model:** It is a ground-up model built on the GPT-2 architecture, from the tokenizer to the decoder
+- **Resources for more information:** https://www.techmahindra.com/en-in/innovation/the-indus-project/

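Since the card describes a ground-up GPT-2-style model with its own tokenizer, here is a minimal sketch of inspecting the published configuration and tokenizer with the `transformers` auto classes. The repository id `nickmalhotra/ProjectIndus` is assumed from the leaderboard URL above, as is the assumption that the checkpoint loads through the standard auto classes.

```python
# Illustrative only: inspect the GPT-2-style config and the custom tokenizer.
from transformers import AutoConfig, AutoTokenizer

model_id = "nickmalhotra/ProjectIndus"  # assumed repository id

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.model_type)      # expected to report a GPT-2 family architecture
print(tokenizer.vocab_size)   # vocabulary size of the ground-up tokenizer
print(tokenizer.tokenize("भारत एक विशाल देश है।"))  # tokenization of a sample Hindi sentence
```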
 # Uses

 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+Uses include question answering and conversation in Hindi and its dialects. The model would be reward-tuned for use across various industries:
+1. Call center
+2. Healthcare
+3. Automotive
+4. Telecom

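As a rough illustration of the question-answering use described above, a minimal generation sketch follows. The repository id and the plain-prompt format are assumptions; the instruct-tuned model may expect a specific prompt template.

```python
# Illustrative generation sketch; prompt format and sampling settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nickmalhotra/ProjectIndus"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "भारत की राजधानी क्या है?"  # "What is the capital of India?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,
        temperature=0.7,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```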
 ## Direct Use

 <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+Direct use is as a foundational model for Hindi and its dialects.


 <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+Uses include question answering and conversation in Hindi and its dialects. The model would be reward-tuned for use across various industries:
+1. Call center
+2. Healthcare
+3. Automotive
+4. Telecom

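As a sketch of plugging the model into a larger application (for example, a call-center assistant), it could be wrapped in a `text-generation` pipeline. The repository id, prompt wording, and generation settings below are assumptions for illustration only.

```python
# Hypothetical downstream integration: wrap the model in a text-generation pipeline.
from transformers import pipeline

generator = pipeline("text-generation", model="nickmalhotra/ProjectIndus")  # assumed repo id

def answer_customer(query: str) -> str:
    """Small call-center-style helper: format the query and return only the new text."""
    prompt = f"ग्राहक: {query}\nसहायक:"  # "Customer: ... / Assistant:" (assumed prompt format)
    result = generator(prompt, max_new_tokens=96, do_sample=True, temperature=0.7)
    return result[0]["generated_text"][len(prompt):].strip()

print(answer_customer("मेरा मोबाइल रिचार्ज नहीं हो रहा है।"))  # "My mobile recharge is not going through."
```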

 <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 <!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
+At the moment, the model cannot be used for fill-in-the-blanks, multiple Q&A, and similar tasks.

 Significant research has explored bias and fairness issues with language models
 (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
 Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+We have tried to address various biases by removing them from the training data. However, since the model is generative, it may still produce hallucinations.
+Any disturbing or harmful stereotype produced by the model is purely unintentional and coincidental.

 ## Recommendations

 <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+The recommendation is not to use biased or negatively connoted content with the model.
