Text Generation
Transformers
Safetensors
English
olmo
conversational
Inference Endpoints
hamishivi committed
Commit 914a678
1 parent: 3f9e038

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -2,7 +2,7 @@
  license: apache-2.0
  datasets:
  - allenai/dolma
- - allenai/tulu-v2-sft-mixture
+ - allenai/tulu-v2-sft-mixture-olmo-4096
  - allenai/ultrafeedback_binarized_cleaned
  language:
  - en
@@ -19,7 +19,7 @@ language:

  OLMo is a series of **O**pen **L**anguage **Mo**dels designed to enable the science of language models.
  The OLMo base models are trained on the [Dolma](https://huggingface.co/datasets/allenai/dolma) dataset.
- The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).
+ The adapted versions are trained on the [Tulu SFT mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) and, for the Instruct version, a [cleaned version of the UltraFeedback dataset](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned).

  OLMo 7B April 2024 Instruct and OLMo SFT are two adapted versions of these models trained for better question answering.
  They are based on the OLMo 7B April release (previously called OLMo 1.7).
@@ -30,8 +30,8 @@ They show the performance gain that OLMo base models can achieve with existing f
  We release two adapted model versions:
  | Model | Training Method(s) | Datasets | Context Length |
  |------|--------|---------|--|
- | [OLMo 7B April 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | 2048 |
- | [OLMo 7B April 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 2048 |
+ | [OLMo 7B April 2024 SFT](https://huggingface.co/allenai/OLMo-1.7-7B-SFT-hf) | SFT | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) | 4096 |
+ | [OLMo 7B April 2024 Instruct](https://huggingface.co/allenai/OLMo-1.7-7B-Instruct-hf) | SFT + DPO | [Tulu 2 SFT Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096) + [Ultrafeedback Cleaned](https://huggingface.co/datasets/allenai/ultrafeedback_binarized_cleaned) | 4096 |

  These models are both trained on top of OLMo 7B April 2024 release (formerly called OLMo 1.7):
  | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
@@ -106,7 +106,7 @@ Core model results for the 7B adapted models are found below.
  ## Model Details

  ### Data
- For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma), [Tulu 2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), and [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) documentation.
+ For training data details, please see the [Dolma](https://huggingface.co/datasets/allenai/dolma), [Tulu 2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture-olmo-4096), and [UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) documentation.

  ### Architecture
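
Since this commit points the model card at the updated Tulu mixture and raises the adapted models' context length to 4096, here is a minimal sketch of loading and prompting the Instruct checkpoint named in the table above with Hugging Face Transformers. The model id `allenai/OLMo-1.7-7B-Instruct-hf` comes from the README diff; the chat-template call, dtype, and generation settings are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: load the OLMo 7B April 2024 Instruct model from the table above
# and run a single chat-style generation. Assumes the tokenizer ships a chat template
# (suggested by the "conversational" tag); settings below are examples, not from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/OLMo-1.7-7B-Instruct-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Build a prompt with the model's chat template; inputs must fit the 4096-token context.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Greedy decoding of a short answer; decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```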