---
license: mit
tags:
- mergekit
- merge
base_model:
- moreh/MoMo-70B-lora-1.8.6-DPO
- moreh/MoMo-70B-lora-1.8.4-DPO
---
# MoMo-70B-lora-1.8.6-DPO-based model with gradient SLERP

This is an English model created by merging the following two models with mergekit's gradient SLERP method (a short sketch of the interpolation follows the list):

* [moreh/MoMo-70B-lora-1.8.6-DPO](https://huggingface.co/moreh/MoMo-70B-lora-1.8.6-DPO)
* [moreh/MoMo-70B-lora-1.8.4-DPO](https://huggingface.co/moreh/MoMo-70B-lora-1.8.4-DPO)
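
For context: SLERP (spherical linear interpolation) blends two models' weight tensors along the arc between them rather than the straight line, which tends to preserve the geometry of the weights better than plain averaging; a *gradient* SLERP varies the interpolation factor `t` from layer to layer. The snippet below is a minimal illustrative sketch of the interpolation itself, not the actual merge code (the merge was produced with mergekit, and the `slerp` helper here is hypothetical):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between weight tensors a and b at factor t."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat \
        + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)

# A "gradient" SLERP sweeps t across layer groups, e.g. from 0 (all model A)
# to 1 (all model B):
ts = torch.linspace(0.0, 1.0, steps=5)
```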
GPU inference example:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "kodonho/Momo-70b-DPO-mixed"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)

# Load in 4-bit so the 70B model fits on a single high-memory GPU;
# requires the bitsandbytes and accelerate packages.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=True,
)
print(model)

# Simple interactive loop: generate until an empty prompt is entered.
prompt = input("please input prompt: ")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0], skip_special_tokens=True))
    prompt = input("please input prompt: ")
```
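
Note: `load_in_4bit=True` depends on the `bitsandbytes` package and `device_map='auto'` on `accelerate` (`pip install bitsandbytes accelerate`). Even quantized to 4 bits, a ~70B-parameter model needs roughly 35-40 GB of GPU memory for the weights alone, so expect to use an A100/H100-class GPU or split the model across several devices.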