---
license: mit
---
# MalayaLLM: Gemma-2-2B [മലയാളം/Malayalam]

<img src="https://github.com/VishnuPJ/MalayaLLM-Gemma2-9B/assets/54801493/19ea32ea-04ba-4198-aab3-cbcdd0c3bc7b" alt="Baby MalayaLLM" width="300" height="auto">

# Introducing the Developer:
Discover the mind behind this model and stay updated on their contributions to the field:
https://www.linkedin.com/in/vishnu-prasad-j/

# Model description
The MalayaLLM models have been improved and customized, expanding upon the groundwork laid by the original Gemma-2-2B model.

- **Model type:** A 2B-parameter Gemma-2 model fine-tuned on Malayalam tokens.
- **Language(s):** Malayalam and English
- **Datasets:**
  * [CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset)
  * [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia)
- **Source Model:** [MalayaLLM_Gemma_2_2B_Base_V1.0](https://huggingface.co/VishnuPJ/MalayaLLM_Gemma_2_2B_Base_V1.0)
- **Instruct Model:** [MalayaLLM_Gemma_2_2B_Instruct_V1.0](https://huggingface.co/VishnuPJ/MalayaLLM_Gemma_2_2B_Instruct_V1.0)
- **GGUF Model:** [MalayaLLM_Gemma_2_2B_Instruct_V1.0_GGUF](https://huggingface.co/VishnuPJ/MalayaLLM_Gemma_2_2B_Instruct_V1.0_GGUF)
- **Training Precision:** `float16`

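If you are not using Unsloth, the instruct model can also be loaded with plain Hugging Face `transformers`. A minimal sketch follows; the prompt is an illustrative placeholder, and for best results use the Malayalam Alpaca template shown in the example further below.

```python
# Minimal sketch: load the instruct model with plain transformers (assumes a CUDA GPU and `accelerate` installed).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VishnuPJ/MalayaLLM_Gemma_2_2B_Instruct_V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt: "What is the capital of Kerala?"
prompt = "കേരളത്തിന്റെ തലസ്ഥാനം ഏതാണ്?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
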
# Old Model
The earlier Gemma-7B and Gemma-9B MalayaLLM models are available here: [MalayaLLM: Gemma-7B](https://huggingface.co/collections/VishnuPJ/malayallm-malayalam-gemma-7b-66851df5e809bed18c2abd25)

## 💾 Installation Instructions
### Conda Installation
Select either `pytorch-cuda=11.8` for CUDA 11.8 or `pytorch-cuda=12.1` for CUDA 12.1. If you have `mamba`, use `mamba` instead of `conda` for faster solving. See this [Github issue](https://github.com/unslothai/unsloth/issues/73) for help on debugging Conda installs.
```bash
conda create --name unsloth_env \
    python=3.10 \
    pytorch-cuda=<11.8/12.1> \
    pytorch cudatoolkit xformers -c pytorch -c nvidia -c xformers \
    -y
conda activate unsloth_env

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

pip install --no-deps "trl<0.9.0" peft accelerate bitsandbytes
```

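Once the environment is created, a quick sanity check can save time before loading the model. This is a minimal sketch that only verifies the packages import and a GPU is visible:

```python
# Minimal sanity check for the freshly created environment.
import torch
import unsloth  # noqa: F401  # verifies Unsloth is importable

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("bfloat16 supported:", torch.cuda.is_bf16_supported())
```
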
## A simple code example

```python
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
#!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
#!pip install --no-deps xformers "trl<0.9.0" peft accelerate bitsandbytes

import sentencepiece as spm  # Gemma tokenizer dependency; importing verifies it is installed
from unsloth import FastLanguageModel
import torch

max_seq_length = 2048  # Choose any! We auto support RoPE Scaling internally!
dtype = None  # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True  # Use 4bit quantization to reduce memory usage. Can be False.

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="VishnuPJ/MalayaLLM_Gemma_2_2B_Instruct_V1.0",
    max_seq_length=max_seq_length,
    dtype=dtype,
    load_in_4bit=load_in_4bit,
)
EOS_TOKEN = tokenizer.eos_token  # Must add EOS_TOKEN
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

#### Giving Instruction with Input
# The Malayalam prompt is the Alpaca template: "Below is an instruction that describes a task.
# Write a response that properly completes the request."
# (This block is commented out; remove the surrounding triple quotes to run it.)
'''
alpaca_prompt_1 = """ഒരു ചുമതല വിവരിക്കുന്ന ഒരു നിർദ്ദേശം ചുവടെയുണ്ട്.
അഭ്യർത്ഥന ശരിയായി പൂർത്തിയാക്കുന്ന ഒരു പ്രതികരണം എഴുതുക.
### നിർദ്ദേശം:
{}
### ഇൻപുട്ട്:
{}
### പ്രതികരണം:
{}"""
inputs = tokenizer([
    alpaca_prompt_1.format(
        # "Continue the fibonnaci sequence.", # instruction
        """താഴെ ഉള്ള വാക്യത്തിൽ "അത്" എന്ന് പറയുന്നത് എന്തിനെ ആണ് ?""",  # instruction ("What does 'അത്' refer to in the sentence below?")
        """ ഒരു വാഹനം കയറ്റം കയറുക ആയിരുന്നു .അതിൽ 4 ആൾക്കാർ ഉണ്ടായിരുന്നു. """,  # input ("A vehicle was climbing a slope. There were 4 people in it.")
        "",  # output - leave this blank for generation!
    )
], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
# Printing the result
print(tokenizer.batch_decode(outputs)[0].split("പ്രതികരണം:\n")[-1])
'''

## Giving Instruction only.
alpaca_prompt_2 = """ഒരു ചുമതല വിവരിക്കുന്ന ഒരു നിർദ്ദേശം ചുവടെയുണ്ട്.
അഭ്യർത്ഥന ശരിയായി പൂർത്തിയാക്കുന്ന ഒരു പ്രതികരണം എഴുതുക.
### നിർദ്ദേശം:
{}
### പ്രതികരണം:
{}"""
while True:
    # Taking user input for the instruction
    instruction = input("Enter the instruction (or type 'exit' to quit): ")
    if instruction.lower() == 'exit':
        break
    # Preparing the input for the model
    inputs = tokenizer([
        alpaca_prompt_2.format(
            instruction,
            "",  # output - leave this blank for generation!
        )
    ], return_tensors="pt").to("cuda")
    # Generating the output
    outputs = model.generate(**inputs, max_new_tokens=128, use_cache=True)
    # Printing the result
    print(tokenizer.batch_decode(outputs)[0].split("പ്രതികരണം:\n")[-1])
print("Program terminated.")
```
## Example Output
```
Enter instruction (or 'exit' to end): ഒരു സമചതുരത്തിന്റെ ഒരു വശം 4 cm ആണെങ്കിൽ , അതിന്റെ area കണ്ടുപിടിക്കുക..
സമചതുരത്തിന്റെ area 16 cm2 ആണ്.<eos>.
Enter instruction (or 'exit' to end): ഇന്ത്യയുടെ അടുത്ത് സ്ഥിതി ചെയുന്ന നാല് രാജ്യങ്ങളുടെ പേര് പറയുക.
"ഇന്ത്യയ്ക്ക് സമീപമുള്ള നാല് രാജ്യങ്ങൾ ഇവയാണ്:
- നേപ്പാൾ
- ഭൂട്ടാൻ
- ടിബറ്റ് (ചൈന)
- പാകിസ്ഥാൻ"<eos>
Enter instruction (or 'exit' to end):exit
```
(The first query asks for the area of a square with a 4 cm side; the second asks for four countries near India, and the model answers Nepal, Bhutan, Tibet (China), and Pakistan.)

## How to run GGUF

- #### llama.cpp Web Server
  - The web server is a lightweight HTTP server that can be used to serve local models and easily connect them to existing clients.
- #### Building llama.cpp
  - To build `llama.cpp` locally, follow the instructions provided in the [build documentation](https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md).
- #### Running llama.cpp as a Web Server
  - Once you have built `llama.cpp`, you can run it as a web server. Below is an example of how to start the server (replace the GGUF filename with the file you downloaded; on Linux/macOS the binary is `llama-server` rather than `llama-server.exe`):
    ```sh
    llama-server.exe -m gemma_2_9b_instruction.Q4_K_M.gguf -ngl 42 -c 128 -n 100
    ```
- #### Accessing the Web UI
  - After starting the server, you can access the basic web UI in your browser at [http://localhost:8080](http://localhost:8080)

    <img src="https://cdn-uploads.huggingface.co/production/uploads/64e65800e44b2668a56f9731/te7d5xjMrtk6RDMEAxmCy.png" alt="llama.cpp web UI" width="600" height="auto">

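Besides the web UI, the server also exposes an HTTP completion endpoint. A minimal sketch of querying it from Python, assuming the server was started as shown above on the default port 8080 and the `requests` package is installed; the prompt reuses the Malayalam Alpaca template from the example earlier:

```python
# Minimal sketch: query the running llama.cpp server's /completion endpoint.
import requests

prompt = (
    "ഒരു ചുമതല വിവരിക്കുന്ന ഒരു നിർദ്ദേശം ചുവടെയുണ്ട്. "
    "അഭ്യർത്ഥന ശരിയായി പൂർത്തിയാക്കുന്ന ഒരു പ്രതികരണം എഴുതുക.\n"
    "### നിർദ്ദേശം:\nകേരളത്തിന്റെ തലസ്ഥാനം ഏതാണ്?\n"  # illustrative question: "What is the capital of Kerala?"
    "### പ്രതികരണം:\n"
)

response = requests.post(
    "http://localhost:8080/completion",
    json={"prompt": prompt, "n_predict": 64},
)
print(response.json()["content"])
```
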
## Made Using UNSLOTH

Thanks to [Unsloth](https://github.com/unslothai/unsloth), the process of fine-tuning large language models (LLMs) has become much easier and more efficient.

<img src="https://cdn-uploads.huggingface.co/production/uploads/64e65800e44b2668a56f9731/WPt_FKUWDdc6--l_Qnb-G.png" alt="Unsloth" width="200" height="auto">

# 🌟Happy coding💻🌟