# Trainer
TRL supports PPO (Proximal Policy Optimization) with an implementation that largely follows the structure introduced in the paper "Fine-Tuning Language Models from Human Preferences" by D. Ziegler et al. [[paper](https://huggingface.co/papers/1909.08593), [code](https://github.com/openai/lm-human-preferences)].
The trainer and model classes are largely inspired by the `transformers.Trainer` and `transformers.AutoModel` classes and adapted for RL.
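As a minimal sketch of a single PPO step (exact `PPOConfig`/`PPOTrainer` signatures can vary between TRL versions; the query text and the constant reward here are placeholders):

```python
import torch
from transformers import AutoTokenizer
from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer

# batch_size=1 keeps the sketch minimal; real training uses larger batches
config = PPOConfig(model_name="gpt2", batch_size=1, mini_batch_size=1)
model = AutoModelForCausalLMWithValueHead.from_pretrained(config.model_name)
tokenizer = AutoTokenizer.from_pretrained(config.model_name)
tokenizer.pad_token = tokenizer.eos_token

ppo_trainer = PPOTrainer(config=config, model=model, tokenizer=tokenizer)

# one rollout: encode a query and sample a response from the policy
query_tensor = tokenizer.encode("What is RLHF?", return_tensors="pt").squeeze(0)
response_tensor = ppo_trainer.generate(query_tensor, return_prompt=False, max_new_tokens=20)

# placeholder scalar reward; in practice this comes from a reward model
reward = [torch.tensor(1.0)]

# one PPO optimization step on the (query, response, reward) triple
stats = ppo_trainer.step([query_tensor], [response_tensor.squeeze(0)], reward)
```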
We also support a `RewardTrainer` that can be used to train a reward model on pairs of chosen and rejected completions.
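For example, a rough reward-model training sketch might look like the following (the backbone model, the two-example dataset, and the expected tokenized column names are illustrative assumptions and may differ between TRL versions):

```python
from datasets import Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from trl import RewardConfig, RewardTrainer

# small backbone chosen for illustration; a reward model is a 1-label classifier
model_name = "distilbert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# tiny illustrative preference dataset with chosen/rejected text pairs
raw = Dataset.from_dict(
    {
        "chosen": ["A helpful, detailed answer."],
        "rejected": ["An unhelpful answer."],
    }
)

def preprocess(examples):
    # tokenize both sides of each preference pair into the columns
    # the reward data collator expects
    chosen = tokenizer(examples["chosen"], truncation=True)
    rejected = tokenizer(examples["rejected"], truncation=True)
    return {
        "input_ids_chosen": chosen["input_ids"],
        "attention_mask_chosen": chosen["attention_mask"],
        "input_ids_rejected": rejected["input_ids"],
        "attention_mask_rejected": rejected["attention_mask"],
    }

dataset = raw.map(preprocess, batched=True)

training_args = RewardConfig(
    output_dir="reward_model",
    max_length=512,
    remove_unused_columns=False,
)
trainer = RewardTrainer(
    model=model,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```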
## CPOConfig
[[autodoc]] CPOConfig
## CPOTrainer
[[autodoc]] CPOTrainer
## DDPOConfig
[[autodoc]] DDPOConfig
## DDPOTrainer
[[autodoc]] DDPOTrainer
## DPOTrainer
[[autodoc]] DPOTrainer
## IterativeSFTTrainer
[[autodoc]] IterativeSFTTrainer
## KTOConfig
[[autodoc]] KTOConfig
## KTOTrainer
[[autodoc]] KTOTrainer
## ORPOConfig
[[autodoc]] ORPOConfig
## ORPOTrainer
[[autodoc]] ORPOTrainer
## PPOConfig
[[autodoc]] PPOConfig
## PPOTrainer
[[autodoc]] PPOTrainer
## RewardConfig
[[autodoc]] RewardConfig
## RewardTrainer
[[autodoc]] RewardTrainer
## SFTTrainer
[[autodoc]] SFTTrainer
## set_seed
[[autodoc]] set_seed