viren-shah committed
Commit b4e9608
1 Parent(s): cefc71d
Update README.md
README.md CHANGED
@@ -146,7 +146,7 @@ Like all LLMs, SN-13B-8k-Instruct has certain limitations:
 
 ## Acknowledgment
 
-We appreciate [Scrolls](https://www.scrolls-benchmark.com/) and [ZeroScrolls](https://www.zero.scrolls-benchmark.com/) for their contributions
+We appreciate [Scrolls](https://www.scrolls-benchmark.com/) and [ZeroScrolls](https://www.zero.scrolls-benchmark.com/) for their contributions in creating effective benchmarks to test the long sequence understanding of Large Language Models.
 We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [HELM](https://crfm.stanford.edu/helm/latest/) for their essential benchmarking contributions,
 which were both very helpful in evaluating SN-13B-8k-Instruct's performance. We appreciate the inspiration from the wave of various recent open-source long sequence models,
 including [XGen](https://blog.salesforceairesearch.com/xgen/), [MPT](https://www.mosaicml.com/blog/long-context-mpt-7b-8k), and