# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit AutoTrain.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "mrcuddle/Ministral-Instruct-2410-8B-DPO-RP"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype="auto",
).eval()

# Prompt content: "hi"
messages = [
    {"role": "user", "content": "hi"}
]

input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Move inputs to the model's device rather than hard-coding "cuda":
# with device_map="auto" the model may land on CPU or be sharded.
# max_new_tokens bounds the reply length (the default cap is very short).
output_ids = model.generate(input_ids.to(model.device), max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Example model response: "Hello! How can I assist you today?"
print(response)
```
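The single-turn example above extends naturally to multi-turn chat: append the model's reply and the next user message to the history before re-applying the chat template. A minimal sketch of that bookkeeping, assuming the alternating-roles convention used by most chat templates (the `add_turn` helper is illustrative, not part of this model's API):

```python
# Keep a running history as a list of {"role", "content"} dicts,
# the format tokenizer.apply_chat_template expects.
def add_turn(messages, role, content):
    # Most chat templates require user/assistant turns to alternate.
    if messages and messages[-1]["role"] == role:
        raise ValueError(f"consecutive '{role}' turns are not allowed")
    messages.append({"role": role, "content": content})
    return messages

history = [{"role": "user", "content": "hi"}]
add_turn(history, "assistant", "Hello! How can I assist you today?")
add_turn(history, "user", "What can you do?")
# Re-run apply_chat_template on `history` to generate the next reply.
```

After each generation, decode the reply, `add_turn` it as the assistant message, and repeat.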
## Model Details

- Model size: 8.02B parameters
- Tensor type: FP16 (Safetensors)
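FP16 stores two bytes per parameter, so the weights alone occupy roughly 16 GB before accounting for activations or KV cache. A quick back-of-the-envelope check:

```python
params = 8.02e9        # 8.02B parameters, from the model details above
bytes_per_param = 2    # FP16 = 16 bits = 2 bytes
weight_gb = params * bytes_per_param / 1e9
print(f"{weight_gb:.2f} GB")  # ≈ 16.04 GB for the weights alone
```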