TheBloke committed on
Commit
6e23c46
1 Parent(s): 8802665

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -860,7 +860,7 @@ It was created with group_size none (-1) to reduce VRAM usage, and with --act-or
  * `gptq_model-4bit-128g.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Does NOT work with [ExLlama](https://github.com/turboderp/exllama) as it's not a Llama model.
- * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
+ * Untested with GPTQ-for-LLaMa.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.
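For context, the parameters named in the hunk (4-bit, Groupsize = -1, Act Order / desc_act = True) correspond directly to an AutoGPTQ quantize config and load call. A minimal sketch, assuming the `auto-gptq` package is installed and using a hypothetical local model path (neither is part of the commit itself):

```python
# Sketch only: loading a GPTQ checkpoint with the parameters stated above.
# The model path below is a placeholder assumption, not from the commit.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Mirrors the README's stated parameters:
# 4-bit, group_size = -1 (none), Act Order / desc_act = True.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,
    desc_act=True,
)

# model_basename matches the file listed in the diff.
model = AutoGPTQForCausalLM.from_quantized(
    "/path/to/model",                       # placeholder: local download dir
    model_basename="gptq_model-4bit-128g",
    use_safetensors=True,
    quantize_config=quantize_config,
    use_triton=False,                       # CUDA mode, per the diff
    device="cuda:0",
)
```

Setting `use_triton=True` would instead select the Triton kernel path, which the diff also lists as supported under AutoGPTQ.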