## TaCo-Maithili-33B 🌮

**Description**

This repo contains the TaCo Maithili 33B model LoRA adapter.

Motivated by parameter-efficient fine-tuning with LoRA and the Chain-of-Thought process (Wei 2022), we propose a new method called TaCo, which uses translation within the Chain of Thought to create a multilingual model. In this work, the Chain-of-Thought process teaches the language model to first translate the instruction into English, generate the required response in English, and then translate it back into the low-resource language. For training, we employed a curriculum learning strategy: we start from the fine-tuned Guanaco-33B model and then apply instruction tuning using the TaCo method.

The datasets used to train this model are available at saillab/taco-datasets.
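
The snippet below is a minimal loading sketch, not part of the original release: it assumes the usual `transformers` + `peft` stack, uses a placeholder id for this adapter repo, and guesses an Alpaca-style prompt template (the adapter was tuned on translated Alpaca/Dolly data, but the exact template is not documented here).

```python
# Minimal sketch (assumptions: transformers + peft installed; the adapter repo
# id below is a placeholder; the Alpaca-style prompt template is a guess).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "timdettmers/guanaco-33b-merged"   # base model named in this card
adapter_id = "saillab/TaCo-Maithili-33B"     # placeholder: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the TaCo LoRA adapter

prompt = "### Instruction:\n<your Maithili instruction>\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```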

⚠️ The TaCo model has not been tested for toxicity and harmful response generation. It is intended purely for research and academic purposes.

**License and Intended Use**

The TaCo adapter weights are trained on top of the Guanaco-33B model (timdettmers/guanaco-33b-merged), which is based on the LLaMA model. We used the Alpaca-52K and Dolly-15K datasets and translated them using Google Cloud Translation. We advise you to review the licenses of Guanaco-33B and the LLaMA model, as well as the terms of use for Google Cloud Translation, before using this model.