Perplexity scores for a Herd of 13B Llamas
#1 by flyingkiwiguy - opened
- Perplexities calculated using build 635 (5c64a09) of llama.cpp and the first 406 lines of wiki.test.raw (a minimal sketch of the computation follows after this list).
- Previous perplexity benchmarking for llamas indicated that 406 lines is enough to compare different model sizes and quantization levels.
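For reference, here is a minimal sketch of the quantity being reported: perplexity is the exponential of the mean negative log-likelihood per token. llama.cpp's perplexity example computes this over fixed-size context windows; the file names and the toy log-probabilities below are illustrative assumptions, not output from the actual run.

```python
import math

def perplexity(token_logprobs):
    # Perplexity is exp of the mean negative log-likelihood per token:
    # PPL = exp(-(1/N) * sum(log p(token_i | tokens_<i)))
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

# Truncate wiki.test.raw to its first 406 lines, matching the setup above.
# (File names here are assumptions for illustration.)
with open("wiki.test.raw", encoding="utf-8") as src, \
        open("wiki.test.406.raw", "w", encoding="utf-8") as dst:
    for i, line in enumerate(src):
        if i >= 406:
            break
        dst.write(line)

# Toy example: three tokens with hypothetical per-token log-probabilities.
print(perplexity([-1.2, -0.7, -2.3]))  # ~4.06
```

Because the score is a per-token average, a shorter test set mainly adds noise rather than bias, which is why a 406-line slice can still rank quantization levels consistently.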