cataluna84 committed ce59ec4 (parent: c64dfc8): Update README.md
README.md
CHANGED
@@ -11,7 +11,7 @@ Multilingual language models are typically large, requiring significant computat
 
 Can we create multilingual models that maintain performance comparable to their larger counterparts while reducing size and latency and improving inference speed?
 
-Techniques:
+# Techniques:
 - Pruning
 - SparseGPT | [GitHub](https://github.com/VishnuVardhanSaiLanka/sparsegpt/tree/aya)
 - ShortGPT | [KLD-Based Pruning & Perplexity Sensitivities](https://github.com/rsk2327/DistAya/tree/main)
@@ -25,7 +25,7 @@ Techniques:
 - KV Cache Compression
 - Fine-Tuning | [GitHub](https://github.com/rsk2327/DistAya/tree/track/fine-tuning)
 
-
+# Datasets:
 Initial 7 datasets unified, having 6.62M rows, which include the following:
 - Bangla_Alpaca_Orca: Bangla
 - Urdu_Instruct_News_Article_Generation: Urdu
@@ -35,7 +35,7 @@ Initial 7 datasets unified, having 6.62M rows, which include the following:
 - Six_Millions_Instruction_Dataset_For_Arabic_Llm_Ft: Arabic
 - instructv3: English
 
-Get in touch with the team:
+## Get in touch with the team:
 - Mayank Bhaskar -> [email protected]
 - Ahmad Anis -> [email protected]
 - Drishti Sharma -> [email protected]
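The Techniques list above points to the project's SparseGPT and ShortGPT repos for the actual pruning code. As a rough, generic illustration of what weight pruning does, here is a minimal sketch of plain magnitude pruning in PyTorch; it is deliberately simpler than SparseGPT's one-shot second-order updates or ShortGPT's KLD-based layer removal, and the function name and sparsity level are illustrative only, not taken from the linked repos.

```python
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.5) -> None:
    """Zero out the smallest-magnitude weights of a Linear layer in place."""
    with torch.no_grad():
        w = linear.weight
        k = int(w.numel() * sparsity)          # number of weights to zero out
        if k == 0:
            return
        threshold = w.abs().flatten().kthvalue(k).values
        mask = w.abs() > threshold             # keep only weights above the cutoff
        w.mul_(mask)

# Toy model: prune every Linear layer to roughly 50% sparsity.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
for layer in model.modules():
    if isinstance(layer, nn.Linear):
        magnitude_prune_(layer, sparsity=0.5)
```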
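The Datasets section describes unifying 7 instruction datasets into a single 6.62M-row corpus. A hypothetical sketch of that kind of merge with the Hugging Face `datasets` library is below; the dataset identifiers and the `language` column are placeholders, not the project's actual Hub paths or schema, and each source is assumed to have already been mapped to a shared set of columns (which `concatenate_datasets` requires).

```python
from datasets import load_dataset, concatenate_datasets

# Placeholder Hub IDs and language tags; the real project unifies 7 datasets
# (Bangla, Urdu, Arabic, English, ...) mapped to a common instruction schema.
SOURCES = {
    "example-org/bangla_alpaca_orca": "bn",
    "example-org/urdu_instruct_news": "ur",
}

parts = []
for repo_id, lang in SOURCES.items():
    ds = load_dataset(repo_id, split="train")
    ds = ds.add_column("language", [lang] * len(ds))  # tag every row with its language
    parts.append(ds)

# All parts must share identical features for concatenation to succeed.
unified = concatenate_datasets(parts)
print(unified.num_rows)
```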