qwp4w3hyb committed
Commit a20aa2d
Parent: 27edf9c

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ tags:
 ## ALPHA, based on experimental WIP code, expect bugs, not for the faint of heart
 
 - Not supported in llama.cpp master; Requires the latest version of the phi3 128k [branch](https://github.com/ggerganov/llama.cpp/pull/7225)
-- just bf16 for now, quants & imatrix are still in the oven will follow soon TM
+- just bf16 for now, quants & imatrix are still in the oven & will follow soon TM
 <!-- - quants done with an importance matrix for improved quantization loss -->
 <!-- - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) -->
 <!-- - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S -->
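The commented-out bullets describe the planned pipeline: generate an importance matrix from the bf16 gguf, then use it when quantizing. With llama.cpp's `imatrix` and `quantize` tools that would look roughly like the sketch below — model paths and the calibration file are hypothetical placeholders, and a build from the phi3 128k branch linked above is assumed:

```shell
# Hedged sketch of the imatrix + quantization workflow hinted at in the README.
# Filenames are placeholders; binaries come from a llama.cpp build of the
# phi3 128k branch (PR 7225).

# 1. Generate an importance matrix from the bf16 gguf over a calibration text
./imatrix -m model-bf16.gguf -f calibration.txt -o model.imatrix

# 2. Quantize from bf16 using that imatrix, e.g. down to IQ1_S
./quantize --imatrix model.imatrix model-bf16.gguf model-IQ1_S.gguf IQ1_S
```

The same `quantize` invocation with a different type name (e.g. `Q8_0`) would cover the "Q\_8\_0 down to IQ1\_S" range mentioned in the last bullet.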