amirabdullah19852020/interpreting_reward_models

Format: Safetensors
Datasets: unalignment/toxic-dpo-v0.2, Anthropic/hh-rlhf, stanfordnlp/imdb
Language: English
License: MIT
Branch: main
Path: interpreting_reward_models/models/hh_rlhf
1 contributor · History: 7 commits
Latest commit: d4553a2 (verified) · amirabdullah19852020 · "Upload folder using huggingface_hub" · 8 months ago
gemma-2b-it     Upload folder using huggingface_hub    8 months ago
gpt-neo-125m    Upload folder using huggingface_hub    8 months ago
pythia-160m     Upload folder using huggingface_hub    8 months ago
pythia-410m     Upload folder using huggingface_hub    8 months ago
pythia-70m      Upload folder using huggingface_hub    8 months ago
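Since every folder here was pushed with huggingface_hub, the same library can fetch an individual model folder back. A minimal sketch, assuming the repo id and the models/hh_rlhf/pythia-70m path shown in the listing above; the allow_patterns filter is an illustrative choice, not part of the repo itself:

    # Download only the pythia-70m reward-model folder from this repo.
    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="amirabdullah19852020/interpreting_reward_models",
        allow_patterns=["models/hh_rlhf/pythia-70m/*"],  # skip the other model folders
    )
    print(local_dir)  # local snapshot root containing models/hh_rlhf/pythia-70m

Filtering with allow_patterns avoids pulling all five model folders when only one is needed; dropping the argument would mirror the whole repository instead.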