rdiehlmartinez committed
Commit ec87c19 • Parent(s): af23c5d
Updating README
README.md CHANGED
@@ -21,8 +21,10 @@ description: >-
 
 # Perplexity Metric
 
-> ⚠️ **This is a fork of the huggingface evaluate library's implementation of perplexity.**
+> ⚠️ **This is a fork of the [huggingface evaluate](https://huggingface.co/spaces/evaluate-metric/perplexity) library's implementation of perplexity.**
 
+Out of the box, Pico supports evaluating on [Paloma](https://huggingface.co/datasets/allenai/paloma), a comprehensive evaluation benchmark for large language models (LLMs) that focuses
+on measuring perplexity across diverse text domains. We use the perplexity metric in this space to compute perplexity on Paloma.
 
 ## Metric Description
 
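For context on the metric the README describes: perplexity is the exponential of the mean negative log-likelihood of a token sequence. The sketch below is a minimal, self-contained illustration of that definition only — it is not the Space's actual implementation, which follows the evaluate library's model-based, strided computation; the function name `perplexity` and its input format (a list of per-token log-probabilities) are assumptions for illustration.

```python
import math

def perplexity(token_logprobs):
    """Illustrative only: perplexity as exp of the mean negative
    log-likelihood over a sequence of per-token log-probabilities."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# If every token is assigned probability 1/4, the mean NLL is log(4),
# so the perplexity is (approximately, up to float error) 4.
print(perplexity([math.log(0.25)] * 5))
```

A model that assigns each token probability 1 would score a perplexity of 1 (the minimum); more uncertain models score higher, which is why lower perplexity on Paloma's domains indicates a better language model.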