ollama committed
Commit 26dbc03
1 Parent(s): 00bd1e3

Update Ollama instructions

Files changed (1):
  README.md +15 -7

README.md CHANGED
````diff
@@ -98,33 +98,41 @@ huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Phi-3-mini-4k-ins
 
 ## How to use with Ollama
 
-Assuming that you have already downloaded GGUF files, here is how you can use it with [Ollama](https://ollama.com/):
-
 1. **Install Ollama:**
 
 ```
 curl -fsSL https://ollama.com/install.sh | sh
 ```
 
-2. **Get the Modelfile:**
+2. **Run the *phi3* model:**
+
+```
+ollama run phi3
+```
+
+### Building from `Modelfile`
+
+Assuming that you have already downloaded GGUF files, here is how you can use them with [Ollama](https://ollama.com/):
+
+1. **Get the Modelfile:**
 
 ```
 huggingface-cli download microsoft/Phi-3-mini-4k-instruct-gguf Modelfile_q4 --local-dir /path/to/your/local/dir
 ```
 
-3. Build the Ollama Model:
+2. Build the Ollama Model:
 Use the Ollama CLI to create your model with the following command:
 
 ```
-ollama create phi3mini -f Modelfile_q4
+ollama create phi3 -f Modelfile_q4
 ```
 
-4. **Run the *phi3mini* model:**
+3. **Run the *phi3* model:**
 
 Now you can run the Phi-3-Mini-4k-Instruct model with Ollama using the following command:
 
 ```
-ollama run phi3mini "Your prompt here"
+ollama run phi3 "Your prompt here"
 ```
 
 Replace "Your prompt here" with the actual prompt you want to use for generating responses from the model.
````
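
For context on the `ollama create` step above: the `Modelfile_q4` downloaded from the repo is a plain-text Ollama Modelfile, a small build spec that points Ollama at the GGUF weights. A minimal sketch is shown below for illustration only; the `FROM` filename and the chat template here are assumptions about what such a file typically contains, not the contents of the actual `Modelfile_q4` (prefer downloading the real one as the diff instructs).

```
# Hypothetical minimal Modelfile for a locally downloaded Phi-3 GGUF file.
# FROM points at the quantized weights (filename assumed, not verified):
FROM ./Phi-3-mini-4k-instruct-q4.gguf

# Phi-3-style chat template so prompts are wrapped in the model's expected tags:
TEMPLATE """<|user|>
{{ .Prompt }}<|end|>
<|assistant|>"""

# Stop generation when the model emits its end-of-turn tag:
PARAMETER stop "<|end|>"
```

With a file like this in the current directory, `ollama create phi3 -f Modelfile_q4` registers the model locally and `ollama run phi3` serves it.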