---
license: apache-2.0
---

# Poro-34B-gguf

This is a GGUF quantization of the [Poro-34B](https://huggingface.co/LumiOpen/Poro-34B) model.

Please refer to that repository's model card for details.

The current revision is a quantization of the 700B-token checkpoint.

The conversion was done with [llama.cpp](https://github.com/ggerganov/llama.cpp) version b1641 (commit 6744dbe924a317e3e2a5a2a4a2037061b2223449)
on a Google Compute Engine machine generously sponsored by [Valohai](https://valohai.com/).
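
For reference, a GGUF file from this repository can be loaded locally with llama.cpp or its Python bindings. The snippet below is a minimal sketch using the `llama-cpp-python` package; the model filename is a placeholder, so substitute the name of the quantization file you actually downloaded.

```python
# Minimal sketch: run a GGUF quantization of Poro-34B with llama-cpp-python.
# The model_path value is a hypothetical filename, not a file guaranteed to
# exist in this repository; adjust it to the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="poro-34b.Q4_K_M.gguf",  # placeholder filename
    n_ctx=2048,                          # context window size
)

output = llm(
    "The capital of Finland is",  # example prompt
    max_tokens=32,
)
print(output["choices"][0]["text"])
```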