Commit 77ce926 committed by hierholzer: Update README.md
Parent(s): fa764b1

README.md CHANGED
````diff
@@ -147,43 +147,23 @@ Ollama does have a Model Library where you can download models:
 ```shell
 https://ollama.com/library
 ```
-This Model Library offers
-However
-
+This Model Library offers many different LLM versions that you can use.
+However, at the time of writing, there is no version of Llama 3.3-Instruct offered in the Ollama library.
+
+If you would like to use Llama 3.3-Instruct (70B), do the following:
+
 | # | Running the 70B quantized version of Llama 3.3-Instruct with Ollama |
 |----|----------------------------------------------------------------------------------------------|
-| 1. |
-| 2. |
+| 1. | Open up the terminal that you have Ollama installed on. |
+| 2. | Paste the following command: |
 ```shell
-
-
-PARAMETER stop "<|im_start|>"
-PARAMETER stop "<|im_end|>"
-TEMPLATE """
-<|im_start|>system
-<|im_end|>
-<|im_start|>user
-<|im_end|>
-<|im_start|>assistant
-"""
-```
-*Replace ./Llama-3.3-70B-Instruct-Q4_K_M.gguf with the correct version and actual path to the GGUF file you downloaded.
-The TEMPLATE line defines the prompt format using system, user, and assistant roles.
-You can customize this based on your use case.*
-| # | Running the 70B quantized version of Llama 3.3-Instruct with Ollama - *continued* |
-|----|-----------------------------------------------------------------------------------|
-| 3. | Now, build the Ollama model using the ollama create command: |
-```shell
-ollama create "Llama-3.3-70B-Instruct-Q4_K_M" -f ./Llama-3.3-70B-Instruct-Q4_K_M.gguf
+ollama run hf.co/hierholzer/Llama-3.3-70B-Instruct-GGUF:Q4_K_M
+
 ```
-*
-model: ./Llama-3.3-70B-Instruct-Q4_K_M.gguf with the quantized model you are using.*
+*Replace Q4_K_M with whatever version you would like to use from this repository.*
 | # | Running the 70B quantized version of Llama 3.3-Instruct with Ollama - *continued* |
 |----|-----------------------------------------------------------------------------------|
-
-```shell
-ollama run Llama-3.3-70B-Instruct-Q4_K_M
-```
+| 3. | This will download & run the model. It will also be saved for future use. |
 
 -------------------------------------------------
 
````
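A note for readers following the removed `ollama create` path: the `-f` flag expects a Modelfile, not the GGUF itself. A minimal sketch, assuming the file name and the ChatML-style template shown in the removed lines (the `FROM` path is a placeholder for wherever you saved your downloaded GGUF):

```
# Minimal Modelfile sketch; the FROM path is a placeholder for your GGUF file
FROM ./Llama-3.3-70B-Instruct-Q4_K_M.gguf
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

With that saved as `./Modelfile`, the build step would be `ollama create Llama-3.3-70B-Instruct-Q4_K_M -f ./Modelfile`, followed by `ollama run Llama-3.3-70B-Instruct-Q4_K_M`.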
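The `hf.co/...` reference above works for any quantization tag published in this repository; swapping the tag is just string substitution. A small sketch, where `Q5_K_M` is an assumed example tag (check the repository's file list for the tags actually offered):

```shell
# Compose the Ollama model reference for a chosen quantization tag.
# Q5_K_M is an assumed example; use a tag that exists in the repository.
QUANT="Q5_K_M"
MODEL="hf.co/hierholzer/Llama-3.3-70B-Instruct-GGUF:${QUANT}"
echo "ollama run ${MODEL}"
```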