Perplexity scores for a Herd of 13B Llamas

by flyingkiwiguy - opened
  1. Perplexities were calculated using build 635 (commit 5c64a09) of llama.cpp and the first 406 lines of wiki.test.raw (a reproduction sketch follows this list).
  2. Previous perplexity benchmarking of Llama models indicated that 406 lines are enough to compare different model sizes and quantization levels.
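
For anyone wanting to reproduce the setup, here is a minimal Python sketch, assuming a llama.cpp checkout built at that commit with the `perplexity` tool in the working directory; the model path and q4_0 quantization are illustrative placeholders, not taken from the post. Perplexity is the exponential of the mean per-token negative log-likelihood over the evaluation text, so lower is better.

```python
import subprocess

# Keep only the first 406 lines of the WikiText-2 test set, matching the
# truncated input used for these scores.
with open("wiki.test.raw", encoding="utf-8") as src:
    head = "".join(line for _, line in zip(range(406), src))
with open("wiki.test.406.raw", "w", encoding="utf-8") as dst:
    dst.write(head)

# Run llama.cpp's perplexity tool: -m selects the model file, -f the
# evaluation text. The model filename below is an assumption.
subprocess.run(
    ["./perplexity",
     "-m", "models/13B/ggml-model-q4_0.bin",
     "-f", "wiki.test.406.raw"],
    check=True,
)
```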

[Image: chart of perplexity scores for the 13B Llama variants]

