viren-shah committed on
Commit
cefc71d
1 Parent(s): e8cc5ce

Update README.md

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -146,7 +146,11 @@ Like all LLMs, SN-13B-8k-Instruct has certain limitations:
 
  ## Acknowledgment
 
- We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [HELM](https://crfm.stanford.edu/helm/latest/) for their essential benchmarking contributions, which were both very helpful in evaluating SN-13B-8k-Instruct's performance. We appreciate the inspiration from the wave of various recent open-source long sequence models, including [XGen](https://blog.salesforceairesearch.com/xgen/), [MPT](https://www.mosaicml.com/blog/long-context-mpt-7b-8k), and [Llama-2](https://ai.meta.com/llama/) and so on. We look forward to witnessing the continued growth and success of open-source long sequence models.
+ We appreciate [Scrolls](https://www.scrolls-benchmark.com/) and [ZeroScrolls](https://www.zero.scrolls-benchmark.com/) for their contributions to creating effective benchmarks to test the long sequence understanding of Large Language Models.
+ We appreciate [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [HELM](https://crfm.stanford.edu/helm/latest/) for their essential benchmarking contributions,
+ which were both very helpful in evaluating SN-13B-8k-Instruct's performance. We appreciate the inspiration from the wave of various recent open-source long sequence models,
+ including [XGen](https://blog.salesforceairesearch.com/xgen/), [MPT](https://www.mosaicml.com/blog/long-context-mpt-7b-8k), and
+ [Llama-2](https://ai.meta.com/llama/) and so on. We look forward to witnessing the continued growth and success of open-source long sequence models.
 
  We highly appreciate the hard work and dedication of these researchers and organizations towards the advancement of the open-source community. Their contributions were invaluable in the development of SN-13B-8k-Instruct, and we hope that our model can contribute to further advancements in the field.
 