---
tags:
- ctranslate2
---
"Ctranslate2" is an amazing library that runs these models. They are faster, more accurate, and use less VRAM/RAM than GGML and GPTQ models.
Instructions on how to run these models: https://github.com/BBC-Esq
- COMING SOON
Learn more about the amazing "ctranslate2" technology:
- https://github.com/OpenNMT/CTranslate2
- https://opennmt.net/CTranslate2/index.html
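
As a preview of what running one of these models looks like, here is a minimal generation sketch using CTranslate2's Python API. The model folder name is a placeholder for whichever converted model you download; the tokenizer still comes from the original Hugging Face repository:

```python
import ctranslate2
import transformers

# Path to a converted CTranslate2 model folder (placeholder name).
model_dir = "Llama-2-7b-chat-hf-ct2-int8"

# Load the model on the GPU with int8 weights.
generator = ctranslate2.Generator(model_dir, device="cuda", compute_type="int8")

# The tokenizer comes from the original Hugging Face checkpoint.
tokenizer = transformers.AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] What is CTranslate2? [/INST]"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

results = generator.generate_batch([tokens], max_length=256, sampling_temperature=0.8)
print(tokenizer.decode(results[0].sequences_ids[0]))
```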
Compared to GGML:
- The VRAM numbers include other programs running and a second monitor, so people can get a realistic idea of how much VRAM/RAM is needed.
- The faster and higher-quality int8 "ctranslate2" 7B model uses the same amount of VRAM as the far-inferior 3-bit "k_m" GGML version!
| Comments | Quant. Method | Quant. Bits | Model | Params. | VRAM Usage | Size on Disk |
|----------|---------------|-------------|-------|---------|------------|--------------|
| Original llama2 models | | 32-bit | llama2-13b-chat-hf | 13B | high | 24.2 GB |
| | | 32-bit | llama2-7b-chat-hf | 7B | high | 12.5 GB |
| Comparable 13B models in terms of quality | ggml | 8-bit | llama2-13b-chat_Q8_0 | 13B | 21.2 GB | 12.8 GB |
| | ctranslate2 | 8-bit | Llama-2-13b-chat-hf-ct2-int8 | 13B | 16 GB | 6.28 GB |
| Comparable 7B models in terms of quality | ggml | 8-bit | llama2-7b-chat_Q8_0 | 7B | 12 GB | 6.66 GB |
| | ctranslate2 | 8-bit | Llama-2-7b-chat-hf-ct2-int8 | 7B | 10.2 GB | 6.28 GB |
| ggml quants lower than 8-bit for additional comparison | ggml | 6-bit | llama2-7b-chat_Q6_K | 7B | 11.3 GB | 5.14 GB |
| | ggml | 5-bit | llama2-7b-chat_Q5_K_M | 7B | 11.6 GB | 4.45 GB |
| | ggml | 5-bit | llama2-7b-chat_Q5_K_S | 7B | 11.4 GB | 4.33 GB |
| | ggml | 4-bit | llama2-7b-chat_Q4_K_M | 7B | 11 GB | 3.79 GB |
| | ggml | 4-bit | llama2-7b-chat_Q4_K_S | 7B | 10.8 GB | 3.56 GB |
| | ggml | 3-bit | llama2-7b-chat_Q3_K_L | 7B | 10.5 GB | 3.34 GB |
| | ggml | 3-bit | llama2-7b-chat_Q3_K_M | 7B | 10.3 GB | 3.05 GB |
| | ggml | 3-bit | llama2-7b-chat_Q3_K_S | 7B | 10 GB | 2.74 GB |
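
For reference, a model like the ct2-int8 ones above can be produced with CTranslate2's Transformers converter. Here is a minimal sketch using the Python API (the source checkpoint and output folder names are just examples); the same conversion is also available on the command line as `ct2-transformers-converter`:

```python
from ctranslate2.converters import TransformersConverter

# Requires the "transformers" and "torch" packages to load the
# original checkpoint before converting it.
converter = TransformersConverter("meta-llama/Llama-2-7b-chat-hf")

# Write the CTranslate2 model with int8 weight quantization
# (output folder name is an example).
converter.convert("Llama-2-7b-chat-hf-ct2-int8", quantization="int8")
```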
Information:
| Format | Approximate Size Compared to `float32` | Required Nvidia Compute Capability | Accuracy Summary |
|-----------------|----------------------------|-----------------|--------------------------|
| `float32` | 100% | 1.0 | Offers more precision and a wider range. Most un-quantized models use this. |
| `int16` | 51.37% | 1.0 | Same as `int8` but with a larger range. |
| `float16` | 50.00% | 5.3 (e.g. Nvidia 10 Series and Higher) | Suitable for scientific computations; balance between precision and memory. |
| `bfloat16` | 50.00% | 8.0 (e.g. Nvidia 30 Series and Higher) | Often used in neural network training; larger exponent range than `float16`. |
| `int8_float32` | 27.47% | test manually (see below) | Combines low precision integer with high precision float. Useful for mixed data. |
| `int8_float16` | 26.10% | test manually (see below) | Combines low precision integer with medium precision float. Saves memory. |
| `int8_bfloat16` | 26.10% | test manually (see below) | Combines low precision integer with reduced precision float. Efficient for neural nets. |
| `int8` | 25% | 1.0 | Lower precision, suitable for whole numbers within a specific range. Often used where memory is crucial. |
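
The size percentages follow directly from bytes per parameter: `float32` stores 4 bytes per weight, `float16`/`bfloat16` store 2, and `int8` stores 1 (plus small per-layer scaling overhead, which is why real files deviate slightly). A quick back-of-the-envelope check:

```python
# Approximate weight storage for a 7B-parameter model (ignores
# per-layer quantization scales and non-weight files).
params = 7_000_000_000
for name, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.1f} GB")
# float32: ~26.1 GB, float16: ~13.0 GB, int8: ~6.5 GB -- in the same
# ballpark as the ~6.28 GB on-disk size of the int8 model above.
```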
| Web Link | Description |
|-------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------|
| [CUDA GPUs Supported](https://en.wikipedia.org/wiki/CUDA#GPUs_supported) | See what level of "compute" your Nvidia GPU supports. |
| [CTranslate2 Quantization](https://opennmt.net/CTranslate2/quantization.html#implicit-type-conversion-on-load) | Even if your GPU/CPU doesn't support the data type of the model you download, "ctranslate2" will automatically run the model in a way that's compatible. |
| [Bfloat16 Floating-Point Format](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) | Visualize data formats. |
| [Nvidia Floating-Point](https://docs.nvidia.com/cuda/floating-point/index.html) | Technical discussion. |
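
Because of that implicit conversion, you can also simply pass `compute_type="auto"` and let CTranslate2 choose the fastest type the device supports. A short sketch (the model folder name is a placeholder):

```python
import ctranslate2

# "auto" selects the fastest compute type supported by this device;
# an explicitly requested type that the hardware cannot run directly
# is converted to the closest supported type on load.
generator = ctranslate2.Generator("Llama-2-7b-chat-hf-ct2-int8",
                                  device="cuda",
                                  compute_type="auto")
```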
You can also check compatibility manually after pip installing "ctranslate2". Within the virtual environment where "ctranslate2" is installed, open a command prompt and run the following commands:
```
python
```
```python
import ctranslate2
```
Check GPU/CUDA compatibility:
```python
ctranslate2.get_supported_compute_types("cuda")
```
Check CPU compatibility:
```python
ctranslate2.get_supported_compute_types("cpu")
```
Each command prints the compute types that your GPU or CPU supports.
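
If you would rather script this check than type it into the Python REPL, here is a small sketch (the preference order is just an example) that picks the first supported type from a list:

```python
import ctranslate2

# get_supported_compute_types returns a set of strings such as
# {"float32", "int8", "float16", "int8_float16", ...}.
supported = ctranslate2.get_supported_compute_types("cuda")

# Prefer the smallest/fastest type this GPU can actually run.
for candidate in ("int8_float16", "int8", "float16", "float32"):
    if candidate in supported:
        print(f"Using compute_type={candidate}")
        break
```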