---
license: apache-2.0
datasets:
- allenai/SciRIFF-train-mix
- allenai/tulu-v2-sft-mixture
language:
- en
---
# Model Card for SciTulu 7B

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5ff4e2a1463be69ae4bd42bd/i13UwWDQ8gmTYzOr-cs3S.png)

SciTulu is a collection of instruction-following language models targeting scientific literature understanding use cases. Starting from the [Tulu v2 7B](https://huggingface.co/allenai/tulu-2-7b) model, SciTulu is trained on a mix of science-specific demonstrations from the [SciRIFF dataset](https://huggingface.co/datasets/allenai/SciRIFF-train-mix), together with general-domain instructions from the [Tulu v2 SFT mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture). SciTulu 7B achieves a 28.1% average improvement over Tulu v2 7B on nine held-out scientific literature understanding tasks. More information can be found in our preprint: [SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature](https://arxiv.org/abs/2406.07835).

Training and evaluation code for SciTulu is available in the SciRIFF repository: https://huggingface.co/datasets/allenai/SciRIFF.

See the [Tulu model card](https://huggingface.co/allenai/tulu-2-7b) for more information on potential risks, biases, and limitations.
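As a usage sketch: Tulu-family models are trained with a simple chat template that wraps each turn in `<|user|>` / `<|assistant|>` markers. The snippet below formats a single-turn prompt in that style; note that the model id `allenai/scitulu-7b` and the exact template are assumptions carried over from the Tulu v2 card, so verify both against the tokenizer before relying on them.

```python
# Minimal prompt-formatting sketch for SciTulu. Assumes the Tulu v2 chat
# template; check the model tokenizer's chat template to confirm.

def format_tulu_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Tulu-style template."""
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

prompt = format_tulu_prompt("Summarize the main findings of the abstract below.")
print(prompt)

# To generate with the model (not run here; requires the `transformers`
# library and substantial GPU memory; the repo id is an assumption):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("allenai/scitulu-7b")
# model = AutoModelForCausalLM.from_pretrained("allenai/scitulu-7b")
# out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=256)
# print(tok.decode(out[0], skip_special_tokens=True))
```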