---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
tags:
- multilingual
- instruction-tuning
- awq
model_name: Aya-23-8B
base_model: CohereForAI/aya-23-8B
inference: false
model_creator: Cohere For AI
model_type: transformer
quantized_by: alijawad07
---
# Aya-23-8B - AWQ Quantized
- Model creator: [Cohere For AI](https://huggingface.co/cohere-for-ai)
- Original model: [Aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B)

<!-- description start -->
## Description

This repo contains AWQ model files for [Cohere's Aya-23-8B](https://huggingface.co/CohereForAI/aya-23-8B).

Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. The model pairs a highly performant pre-trained Command family of models with the recently released Aya Collection. The result is a powerful multilingual large language model serving 23 languages.

### About AWQ

AWQ is an efficient, accurate, and fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

AWQ is also supported by the continuous-batching inference server [vLLM](https://github.com/vllm-project/vllm), which allows AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantized models. However, AWQ makes it possible to use much smaller GPUs, which can simplify deployment and reduce overall cost: for example, a 70B model can run on 1 x 48GB GPU instead of 2 x 80GB.
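
Below is a minimal sketch of serving an AWQ checkpoint like this one with vLLM's offline API. The model path is a placeholder, and it assumes a vLLM build with AWQ support and the Cohere/Aya-23 architecture:

```python
from vllm import LLM, SamplingParams

# Load an AWQ-quantized checkpoint with vLLM's AWQ kernels (path/ID is a placeholder)
llm = LLM(model="path/to/aya-23-8B-AWQ", quantization="awq")

sampling = SamplingParams(temperature=0.3, max_tokens=100)
outputs = llm.generate(["Write a short greeting in French."], sampling)
print(outputs[0].outputs[0].text)
```

For chat-style use you would normally build the prompt with the model's chat template before passing it to `generate`.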
54
+ <!-- description end -->
55
+
56
+ ## Model Summary
57
+
58
+ Aya 23 is an open weights research release of an instruction fine-tuned model with highly advanced multilingual capabilities. Aya 23 focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the recently released [Aya Collection](https://huggingface.co/datasets/CohereForAI/aya_collection). The result is a powerful multilingual large language model serving 23 languages.
59
+
60
+ It covers 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
61
+
62
+ Developed by: [Cohere For AI](https://cohere.for.ai) and [Cohere](https://cohere.com/)
63
+
64
+ - Model: aya-23-8B-AWQ-GEMM
65
+ - Model Size: 8 billion parameters
66
+ - Bits: 4
67
+ - Q-Group Size: 128
68
+
69
+ **This is an AWQ quantized version of the Aya-23-8B model using AutoAWQ.**
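
For reference, a quantization with these settings can be reproduced with AutoAWQ roughly as follows. This is a sketch, not the exact script used for this repo: the output path is a placeholder, `zero_point=True` is the AutoAWQ default rather than a documented setting, and it assumes an AutoAWQ version that supports the Cohere/Aya architecture.

```python
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

base_model = "CohereForAI/aya-23-8B"
quant_path = "aya-23-8B-AWQ-GEMM"  # placeholder output directory

# 4-bit weights, group size 128, GEMM kernels - matching the settings listed above.
# zero_point=True is the AutoAWQ default and is assumed here.
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Run AWQ calibration (default calibration set) and quantize the weights,
# then save the quantized checkpoint alongside the tokenizer
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```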

### Usage

Install a `transformers` release that includes support for this model (4.41.1, as pinned below, or later) together with AutoAWQ.

```python
# pip install transformers==4.41.1
# pip install autoawq
from transformers import AutoTokenizer
from awq import AutoAWQForCausalLM

# Path to (or Hub ID of) the AWQ-quantized checkpoint
quant_path = "path/to/quantized/model"

tokenizer = AutoTokenizer.from_pretrained(quant_path)
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)

# Format message with the command-r-plus chat template
# (Turkish: "Write my mother a letter telling her how much I love her")
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz

# Move the prompt tokens to the GPU where the quantized model is loaded
input_ids = input_ids.to("cuda")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
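
For interactive use, the same model can stream tokens as they are generated. A minimal sketch with transformers' `TextStreamer`, reusing the `model`, `tokenizer`, and `input_ids` objects from the snippet above:

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated, skipping the echoed prompt
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

_ = model.generate(
    input_ids,
    streamer=streamer,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
```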

## Model Details

**Input**: Models input text only.

**Output**: Models generate text only.

**Model Architecture**: Aya-23-8B is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, this model is fine-tuned (IFT) to follow human instructions.

**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.

**Context length**: 8192 tokens

Please refer to the [Aya 23 technical report](https://cohere.com/research/papers/aya-command-23-8b-and-35b-technical-report-2024-05-23) for further details about the base model, data, instruction tuning, and evaluation.