# Model Card for llm-course-hw2-ppo

This model is an aligned version of HuggingFaceTB/SmolLM-135M-Instruct, fine-tuned with PPO using TRL.

The final RLHF reward is 1.60; the final score is 1.66.

## Example usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DEVICE = torch.device("cuda")
tokenizer = AutoTokenizer.from_pretrained("efromomr/llm-course-hw2-ppo")
check_model = AutoModelForCausalLM.from_pretrained("efromomr/llm-course-hw2-ppo")
check_model = check_model.to(DEVICE)
check_model = check_model.eval()

messages = [{"role": "user", "content": "What's your morning routine like?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt")

generated_ids = check_model.generate(model_inputs.input_ids.to(DEVICE), max_new_tokens=256, do_sample=True)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)
# user
# What's your morning routine like?
# assistant
# As I sit here, I'm already feeling a bit...off. I'm trying to get out of bed, but it's hard to get out of bed. I'm trying to get some morning sunlight, but it's just not happening. I'm also feeling a bit...uncomfortable. I'm trying to get some fresh air, but it's just not happening. I'm trying to get some morning exercise, but it's just not happening. I'm trying to get some morning meditation, but it's just not happening.
```

## Training procedure


This model was trained with PPO, a method introduced in Fine-Tuning Language Models from Human Preferences.
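At the core of PPO is the clipped surrogate objective, which limits how far each update can move the policy away from the one that collected the data. The sketch below illustrates that objective in NumPy; it is a minimal illustration, not the TRL implementation, and the function name and arguments are hypothetical:

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    # Probability ratio between the updated and the data-collecting policy
    ratio = np.exp(logp_new - logp_old)
    # Unclipped surrogate and its clipped counterpart
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # PPO maximizes the elementwise minimum; return the negative as a loss
    return -np.mean(np.minimum(unclipped, clipped))
```

When the new policy matches the old one, the ratio is 1 and the loss reduces to the negative mean advantage; when the ratio strays outside `[1 - eps, 1 + eps]`, the clipped term caps the incentive to move further.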

## Framework versions

  • TRL: 0.15.2
  • Transformers: 4.47.0
  • PyTorch: 2.5.1+cu121
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

## Citations

Cite PPO as:

```bibtex
@article{mziegler2019fine-tuning,
    title        = {{Fine-Tuning Language Models from Human Preferences}},
    author       = {Daniel M. Ziegler and Nisan Stiennon and Jeffrey Wu and Tom B. Brown and Alec Radford and Dario Amodei and Paul F. Christiano and Geoffrey Irving},
    year         = 2019,
    eprint       = {arXiv:1909.08593}
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```