Commit 430538f by Lego-MT (parent 46e7125): update README
---
license: mit
---
### Model Sources

Paper: LlamaX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages

Link: https://arxiv.org/pdf/2407

Repository: https://github.com/CONE-MT/
### Model Description

LlamaX is a multilingual language model developed through continued pre-training of Llama2, and it supports over 100 languages.
Its translation capability far exceeds that of general models of the same scale, and it can serve as a base model for downstream multilingual tasks.
### 🔥 Effortless Multilingual Translation with a Simple Prompt

LlamaX supports translation between more than 100 languages, surpassing the performance of similarly scaled LLMs.
```python
def prompt_template(query, src_language, trg_language):
    instruction = f'Translate the following sentences from {src_language} to {trg_language}.'
    prompt = (
        'Below is an instruction that describes a task, paired with an input that provides further context. '
        'Write a response that appropriately completes the request.\n'
        f'### Instruction:\n{instruction}\n'
        f'### Input:\n{query}\n### Response:'
    )
    return prompt
```

And then run the following code to execute a translation:

```python
from transformers import AutoTokenizer, LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER)

query = "你好,今天是个好日子"  # "Hello, today is a good day"
prompt = prompt_template(query, 'Chinese', 'English')
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens bounds only the generated continuation; max_length would
# count the (much longer) prompt as well and cut generation off immediately
generate_ids = model.generate(inputs.input_ids, max_new_tokens=30)
tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
# => "Hello, today is a good day"
```
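Because the same instruction format covers every supported direction, prompts for several language pairs can be built in one pass and sent to the model as a batch. Below is a minimal sketch of the prompt-construction half (the model and tokenizer calls are unchanged from the example above; the sentence/pair triples here are illustrative, not from the paper):

```python
def prompt_template(query, src_language, trg_language):
    # Same Alpaca-style template as in the snippet above
    instruction = f'Translate the following sentences from {src_language} to {trg_language}.'
    prompt = (
        'Below is an instruction that describes a task, paired with an input that provides further context. '
        'Write a response that appropriately completes the request.\n'
        f'### Instruction:\n{instruction}\n'
        f'### Input:\n{query}\n### Response:'
    )
    return prompt

# Illustrative batch of (sentence, source language, target language) triples
requests = [
    ("你好,今天是个好日子", "Chinese", "English"),
    ("Bonjour le monde", "French", "German"),
    ("Hola, ¿cómo estás?", "Spanish", "Japanese"),
]
prompts = [prompt_template(q, src, trg) for q, src, trg in requests]

# The prompts then go through the tokenizer (with padding) and model.generate,
# exactly as in the single-sentence example:
#   inputs = tokenizer(prompts, return_tensors="pt", padding=True)
#   generate_ids = model.generate(inputs.input_ids, max_new_tokens=50)
```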

### 🔥 Effective Base Model for Multilingual Tasks

LlamaX preserves its efficacy on general tasks while improving performance on multilingual tasks.
We fine-tuned LlamaX using only the English training set of each downstream task, and it still shows significant improvements on non-English test sets. We provide fine-tuned LlamaX models for the following three tasks:

Math Reasoning: https://huggingface.co/TransLLaMA/TransLLaMA2-7B-MetaMath

Commonsense Reasoning: https://huggingface.co/TransLLaMA/TransLLaMA2-7B-X-CSQA

Natural Language Inference: https://huggingface.co/TransLLaMA/TransLLaMA2-7B-XNLI
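To pick the right checkpoint programmatically, the three task-specific repositories above can be kept in a small lookup table. A minimal sketch (the repo IDs mirror the links listed above; the task names and helper function are our own labels, and the `from_pretrained` call is left commented out because it downloads weights):

```python
# Map each downstream task to its fine-tuned checkpoint (repo IDs from the list above)
TASK_CHECKPOINTS = {
    "math_reasoning": "TransLLaMA/TransLLaMA2-7B-MetaMath",
    "commonsense_reasoning": "TransLLaMA/TransLLaMA2-7B-X-CSQA",
    "natural_language_inference": "TransLLaMA/TransLLaMA2-7B-XNLI",
}

def checkpoint_for(task: str) -> str:
    """Return the Hugging Face repo ID for a supported task."""
    try:
        return TASK_CHECKPOINTS[task]
    except KeyError:
        raise ValueError(f"Unknown task {task!r}; expected one of {sorted(TASK_CHECKPOINTS)}")

repo_id = checkpoint_for("math_reasoning")
# model = LlamaForCausalLM.from_pretrained(repo_id)  # downloads the fine-tuned weights
```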

### Supported Languages

Afrikaans (af), Amharic (am), Arabic (ar), Armenian (hy), Assamese (as), Asturian (ast), Azerbaijani (az), Belarusian (be), Bengali (bn), Bosnian (bs), Bulgarian (bg), Burmese (my), Catalan (ca), Cebuano (ceb), Chinese Simpl (zho), Chinese Trad (zho), Croatian (hr), Czech (cs), Danish (da), Dutch (nl), English (en), Estonian (et), Filipino (tl), Finnish (fi), French (fr), Fulah (ff), Galician (gl), Ganda (lg), Georgian (ka), German (de), Greek (el), Gujarati (gu), Hausa (ha), Hebrew (he), Hindi (hi), Hungarian (hu), Icelandic (is), Igbo (ig), Indonesian (id), Irish (ga), Italian (it), Japanese (ja), Javanese (jv), Kabuverdianu (kea), Kamba (kam), Kannada (kn), Kazakh (kk), Khmer (km), Korean (ko), Kyrgyz (ky), Lao (lo), Latvian (lv), Lingala (ln), Lithuanian (lt), Luo (luo), Luxembourgish (lb), Macedonian (mk), Malay (ms), Malayalam (ml), Maltese (mt), Maori (mi), Marathi (mr), Mongolian (mn), Nepali (ne), Northern Sotho (ns), Norwegian (no), Nyanja (ny), Occitan (oc), Oriya (or), Oromo (om), Pashto (ps), Persian (fa), Polish (pl), Portuguese (pt), Punjabi (pa), Romanian (ro), Russian (ru), Serbian (sr), Shona (sn), Sindhi (sd), Slovak (sk), Slovenian (sl), Somali (so), Sorani Kurdish (ku), Spanish (es), Swahili (sw), Swedish (sv), Tajik (tg), Tamil (ta), Telugu (te), Thai (th), Turkish (tr), Ukrainian (uk), Umbundu (umb), Urdu (ur), Uzbek (uz), Vietnamese (vi), Welsh (cy), Wolof (wo), Xhosa (xh), Yoruba (yo), Zulu (zu)
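The list above follows a uniform "Name (code)" pattern, so it can be parsed into a name-to-code mapping for programmatic use (e.g., to feed `prompt_template` from language codes). A small sketch over a few entries (the `supported` string is an illustrative excerpt; the full list parses the same way):

```python
# A few entries from the supported-languages list above
supported = "Afrikaans (af), Amharic (am), Arabic (ar), Chinese Simpl (zho), Zulu (zu)"

# Each item looks like "Name (code)"; split on the separator, then peel off the code
lang_codes = {}
for item in supported.split(", "):
    name, _, code = item.rpartition(" (")
    lang_codes[name] = code.rstrip(")")
```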

### Model Index

| Model   | TransLlama                                               | TransLlama-Alpaca                                               |
|---------|----------------------------------------------------------|-----------------------------------------------------------------|
| Llama-2 | [Link](https://huggingface.co/TransLLaMA/TransLLaMA2-7B) | [Link](https://huggingface.co/TransLLaMA/TransLLaMA2-7B-Alpaca) |
| Llama-3 | [Link](https://huggingface.co/TransLLaMA/TransLLaMA3-8B) | [Link](https://huggingface.co/TransLLaMA/TransLLaMA3-8B-Alpaca) |

### Citation

If our model helps your work, please cite this paper:

```bibtex
@inproceedings{Huang2024MindMergerEB,
    title={LlamaX: Scaling Linguistic Horizons of LLM by Enhancing Translation Capabilities Beyond 100 Languages},
    year={2024},
}
```