Update README.md
README.md
CHANGED
@@ -41,8 +41,9 @@ This pipeline can be broken up into three key steps:
 
 2. **Reward Model (RM) fine-tuning:** See [here](https://huggingface.co/FSALab/fsalab-chat-opt-350m-reward-deepspeed)
 
-3. **Reinforcement-learning from Human feedback (RLHF) fine-tuning:** At the completion of the prior two steps, the final RLHF fine-tuning can be initiated. This involves the collection of both the *fine-tuned model* from step 1 and the *reward model** from step 2 and train them on the data-set with comparisons. This generates both an [actor](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed) and **critic** model.
+3. **Reinforcement Learning from Human Feedback (RLHF) fine-tuning:** Once the prior two steps are complete, the final RLHF fine-tuning can begin. It takes the *fine-tuned model* from step 1 and the *reward model* from step 2 and trains them on the comparison dataset. This produces both an [actor](https://huggingface.co/FSALab/fsalab-chat-opt-1.3b-rlhf-actor-deepspeed) and a **critic** model. An *[actor model](https://huggingface.co/FSALab/chat-opt-1.3b-rlhf-actor-ema-deepspeed) with an exponential moving average (EMA)* of the weights is also generated, which is known to improve conversational response quality.
 
+
 To view the details behind each step head into their respective links and view the model card there.
 
 ### Reinforcement learning from human feedback
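The EMA actor mentioned in step 3 maintains a smoothed copy of the actor's weights alongside training. The idea can be sketched as below; this is a minimal illustration only, not the repository's actual implementation, and the decay value and the plain-dict weight representation are assumptions.

```python
# Sketch of an exponential moving average (EMA) over model weights.
# Assumptions: weights are a plain dict of floats, and the decay
# value is illustrative (real setups often use ~0.99 or higher).

def update_ema(ema_params, model_params, decay=0.992):
    """Blend the current model weights into the EMA copy in place."""
    for name, value in model_params.items():
        ema_params[name] = decay * ema_params[name] + (1.0 - decay) * value
    return ema_params

# Usage: keep a separate EMA copy and update it after each optimizer step.
ema = {"w": 0.0}
for _ in range(3):                     # pretend training keeps weight at 1.0
    update_ema(ema, {"w": 1.0}, decay=0.5)
# ema["w"] approaches 1.0: 0.5, then 0.75, then 0.875
```

The EMA copy changes more slowly than the raw weights, which smooths out noisy late-training updates; the EMA actor is then used for generation instead of the raw actor.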