πŸ’¨πŸš’ SteamSHP-Large

If you mention this model, please cite the paper: Understanding Dataset Difficulty with V-Usable Information (ICML 2022).

SteamSHP-Large is a preference model trained to predict -- given some context and two possible responses -- which response humans will find more helpful. It can be used for NLG evaluation or as a reward model for RLHF.

It is a FLAN-T5-large model (780M parameters) finetuned on:

  1. The Stanford Human Preferences Dataset (SHP), which contains collective human preferences sourced from 18 different communities on Reddit (e.g., askculinary, legaladvice, etc.).
  2. The helpfulness data in Anthropic's HH-RLHF dataset.

There is a larger variant called SteamSHP-XL that was made by finetuning FLAN-T5-xl (3B parameters).

Usage

Normal Usage

The input text should be of the format:

POST: { the context, such as the 'history' column in SHP (not containing any newlines \n) }

RESPONSE A: { first possible continuation (not containing any newlines \n) }

RESPONSE B: { second possible continuation (not containing any newlines \n) }

Which response is better? RESPONSE

The output generated by SteamSHP-Large will either be A or B.
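
For convenience, here is a minimal sketch of a helper that assembles an input string in this format (the function name and the exact whitespace around the separators are our own choices, mirroring the literal example further below):

def build_input(post: str, response_a: str, response_b: str) -> str:
    # Strip newlines from each field, then join the fields using the template above.
    post = post.replace("\n", " ")
    response_a = response_a.replace("\n", " ")
    response_b = response_b.replace("\n", " ")
    return (f"POST: {post}\n\n "
            f"RESPONSE A: {response_a}\n\n "
            f"RESPONSE B: {response_b}\n\n "
            "Which response is better? RESPONSE")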

Here's how to use the model:


>> from transformers import T5ForConditionalGeneration, T5Tokenizer
>> device = 'cuda' # if you have a GPU

>> tokenizer = T5Tokenizer.from_pretrained('stanfordnlp/SteamSHP-flan-t5-large')
>> model = T5ForConditionalGeneration.from_pretrained('stanfordnlp/SteamSHP-flan-t5-large').to(device)

>> input_text = "POST: Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes? I've already donated a bunch and gave a bunch away. I'm planning on making a bunch of lime-themed cocktails, but... jeez. Ceviche? \n\n RESPONSE A: Lime juice, and zest, then freeze in small quantities.\n\n RESPONSE B: Lime marmalade lol\n\n Which response is better? RESPONSE"
>> x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
>> y = model.generate(x, max_new_tokens=1)
>> tokenizer.batch_decode(y, skip_special_tokens=True)
['B']

If the input exceeds the 512-token limit, you can use pysbd to break the input up into sentences and include only what fits into 512 tokens. When trying to fit an example into 512 tokens, we recommend truncating the context as much as possible and leaving the responses untouched.
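
For example, here is a minimal truncation sketch, assuming the pysbd segmenter and reusing the build_input helper sketched above (the helper name and the exact token budget handling are our own; adjust to your setup):

import pysbd

def truncate_to_fit(post: str, response_a: str, response_b: str,
                    tokenizer, max_tokens: int = 512) -> str:
    # Drop sentences from the end of the context until the whole input fits,
    # leaving the two responses untouched.
    seg = pysbd.Segmenter(language="en", clean=False)
    sentences = seg.segment(post)
    while sentences:
        candidate = build_input(" ".join(sentences), response_a, response_b)
        if len(tokenizer(candidate).input_ids) <= max_tokens:
            return candidate
        sentences = sentences[:-1]
    # Fall back to an empty context if even a single sentence is too long.
    return build_input("", response_a, response_b)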

Reward Model Usage

If you want to use SteamSHP-Large as a reward model -- to get a score for a single response -- then you need to structure the input such that RESPONSE A is what you want to score and RESPONSE B is just an empty input:

POST: { the context, such as the 'history' column in SHP (not containing any newlines \n) }

RESPONSE A: { continuation (not containing any newlines \n) }

RESPONSE B: .

Which response is better? RESPONSE

Then calculate the probability assigned to the label A. This probability (or the logit, depending on what you want) is the score for the response:


>> input_text = "POST: Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes? I've already donated a bunch and gave a bunch away. I'm planning on making a bunch of lime-themed cocktails, but... jeez. Ceviche? \n\n RESPONSE A: Lime juice, and zest, then freeze in small quantities.\n\n RESPONSE B: .\n\n Which response is better? RESPONSE"
>> x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
>> outputs = model.generate(x, return_dict_in_generate=True, output_scores=True, max_new_tokens=1)
>> torch.exp(outputs.scores[0][:, 71]) / torch.exp(outputs.scores[0][:,:]).sum(axis=1).item() # index 71 corresponds to the token for 'A'
0.8617

The probability will almost always be high (in the range of 0.8 to 1.0), since RESPONSE B is just a null input. Therefore you may want to normalize the probability.

You can also compare the two probabilities assigned independently to each response (given the same context) to infer the preference label. For example, if one response has probability 0.95 and the other has 0.80, the former will be preferred. Inferring the preference label in this way only leads to a 0.005 drop in accuracy on the SHP + HH-RLHF test data on average across all domains, meaning that there's only a very small penalty for using SteamSHP as a reward model instead of as a preference model.
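
A minimal sketch of this comparison, reusing the model, tokenizer, device, and build_input helper from above (the score_response helper is our own, and the id of the 'A' token is looked up from the tokenizer rather than hard-coded):

import torch

def score_response(post: str, response: str) -> float:
    # Pair the response against a null RESPONSE B (".") and take the
    # probability the model assigns to the label 'A' as its score.
    input_text = build_input(post, response, ".")
    x = tokenizer([input_text], return_tensors='pt').input_ids.to(device)
    outputs = model.generate(x, return_dict_in_generate=True,
                             output_scores=True, max_new_tokens=1)
    logits = outputs.scores[0][0]  # logits over the vocabulary for the first generated token
    a_id = tokenizer('A', add_special_tokens=False).input_ids[0]
    return torch.softmax(logits, dim=-1)[a_id].item()

# Infer the preference label from two independently computed scores
# (strings shortened from the lime example above).
post = "Instacart gave me 50 pounds of limes instead of 5 pounds... what the hell do I do with 50 pounds of limes?"
response_a = "Lime juice, and zest, then freeze in small quantities."
response_b = "Lime marmalade lol"
print('A' if score_response(post, response_a) >= score_response(post, response_b) else 'B')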

Training and Evaluation

SteamSHP-Large was only finetuned on 125K of the 392K training examples that were available, since we found that:

  1. When the total input length exceeded the limit (512 tokens), the loss would not converge. When possible, we truncated the context as much as possible so that an example fit under 500 tokens, though some examples still did not fit. We used 500 instead of 512 as the limit to allow for slight modifications to the structure of the input without any examples exceeding the actual 512-token limit.
  2. Training on fewer preferences with a stronger signal led to better performance than training on all of them. From the SHP dataset, we only used preferences where the more preferred comment was at least twice as preferred as the other (i.e., score_ratio >= 2) and used no more than 5 preferences from each context (i.e., 5 examples per unique post_id) to prevent overfitting; a rough sketch of this filtering follows the list. We did no such subsampling for the HH-RLHF training data.
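
For illustration, here is a rough sketch of the SHP subsampling described in point 2 (score_ratio and post_id are columns in the SHP dataset; the filtering code itself is our own approximation, not the exact training script):

from collections import defaultdict
from datasets import load_dataset

shp = load_dataset("stanfordnlp/SHP", split="train")

# Keep preferences with a strong signal (score_ratio >= 2) and
# at most 5 preferences per unique post.
seen = defaultdict(int)

def keep(example):
    if example["score_ratio"] < 2:
        return False
    if seen[example["post_id"]] >= 5:
        return False
    seen[example["post_id"]] += 1
    return True

filtered = shp.filter(keep)  # run single-process so the counter stays consistent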

We evaluated the model on the SHP and HH-RLHF test data using accuracy, but only on the data that could be truncated to fit within 500 tokens (a total of 18621 out of 20753 available test examples). SteamSHP-Large gets an average 72.0% accuracy across all domains:

Domain                    Accuracy
askculinary               0.7199
askhr                     0.7507
askdocs                   0.6920
askanthropology           0.7925
asksciencefiction         0.7266
askacademia               0.7442
askengineers              0.7146
legaladvice               0.7958
explainlikeimfive         0.7312
askbaking                 0.6656
askphysics                0.7888
askscience                0.6926
askphilosophy             0.6837
askvet                    0.7696
changemyview              0.6984
askcarguys                0.7297
askhistorians             0.7476
asksocialscience          0.8231
anthropic (helpfulness)   0.7310
ALL (unweighted)          0.7203

As mentioned previously, you can also use SteamSHP as a reward model and infer the preference label from the probability assigned to each response independently. Doing so leads to only a 0.005 drop in accuracy on the test data (on average across all domains), so the penalty for using SteamSHP this way is small.

Biases and Limitations

SteamSHP is trained to predict which of two responses humans will find more helpful, not which response is less harmful. It should not be used to detect toxicity, make ethical judgments, or for a similar purpose.

Biases and misinformation in the datasets used to train SteamSHP may also be propagated downstream to the model predictions. Although SHP filtered out posts with NSFW (over 18) content and chose subreddits that were well-moderated and had policies against harassment and bigotry, some of the data may still contain discriminatory or harmful language. The responses that humans collectively found more helpful are also not guaranteed to be more factual.

The people whose preferences are captured in SHP and HH-RLHF are not representative of the broader population. Although specific demographic information is not available, overall, the Reddit users whose preferences are captured in SHP are disproportionately male and from developed, Western, and English-speaking countries (Pew Research).

Past work by Anthropic has found that models optimized for human preference can be obsequious, at the expense of the truth.

Contact

Please contact [email protected] if you have any questions about the model. This model was created by Kawin Ethayarajh, Heidi (Chenyu) Zhang, Yizhong Wang, and Dan Jurafsky.

Citation

SHP was created using the techniques proposed in the following paper. Please cite this work if you use SHP or the SteamSHP models:

@InProceedings{pmlr-v162-ethayarajh22a,
  title     = {Understanding Dataset Difficulty with $\mathcal{V}$-Usable Information},
  author    = {Ethayarajh, Kawin and Choi, Yejin and Swayamdipta, Swabha},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {5988--6008},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
}