Raj-Maharajwala committed
Commit 26a8e83 · verified · 1 Parent(s): 9058dfa

Update README.md

Files changed (1)
  1. README.md +7 -3
README.md CHANGED
@@ -48,12 +48,11 @@ Fine-tuned for insurance-related queries and conversations.
 - **Quantized Model:** Raj-Maharajwala/Open-Insurance-LLM-Llama3-8B-GGUF
 - **Model Architecture:** Llama
 - **Quantization:** 8-bit (Q8_0), 5-bit (Q5_K_M), 4-bit (Q4_K_M), 16-bit
+- **Finetuned Dataset**: InsuranceQA
 - **Developer:** Raj Maharajwala
 - **License:** llama3
 - **Language:** English
-
-## Finetuned Dataset:
-- **InsuranceQA**
+-
 
 ## Setup Instructions
 
@@ -79,6 +78,11 @@ export FORCE_CMAKE=1
 CMAKE_ARGS="-DGGML_METAL=on" pip install --upgrade --force-reinstall llama-cpp-python==0.3.2 --no-cache-dir
 ```
 
+#### For Windows Users (CPU Support)
+```bash
+pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
+```
+
 ### Dependencies
 
 Then install dependencies (inference_requirements.txt) attached under `Files and Versions`:
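
The quantization levels listed in the README (Q8_0, Q5_K_M, Q4_K_M, 16-bit) map to rough on-disk sizes for an 8B-parameter model. A minimal sketch of the arithmetic — the bits-per-weight figures are approximate llama.cpp averages assumed here for illustration, not values taken from the model card:

```python
# Rough on-disk size for an 8B-parameter GGUF at several quantization levels.
# Bits-per-weight values are approximate averages for llama.cpp quant types
# (assumed for illustration); real files are slightly larger due to metadata.

def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Size in gigabytes: parameters * bits-per-weight / 8 bits-per-byte / 1e9."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 8e9  # Llama3-8B
for name, bpw in [("Q8_0", 8.5), ("Q5_K_M", 5.7), ("Q4_K_M", 4.9), ("F16", 16.0)]:
    print(f"{name}: ~{approx_size_gb(N_PARAMS, bpw):.1f} GB")
```

This back-of-the-envelope estimate can help pick a quant that fits available RAM/VRAM before downloading.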