---
license: creativeml-openrail-m
language:
  - en
library_name: transformers
pipeline_tag: text-generation
tags:
  - '1.0e-5'
---

# Llama-Thinker-3B-Preview

Llama-Thinker-3B-Preview is a pretrained, instruction-tuned generative model designed for multilingual applications. It is trained on synthetic datasets based on long chains of thought, enabling it to perform complex reasoning tasks effectively.

**Model Architecture:** Llama-Thinker-3B-Preview is an autoregressive language model based on Llama 3.2 that uses an optimized transformer architecture. The tuned versions undergo supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.

## Use with transformers

Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers `pipeline` abstraction or by leveraging the `Auto` classes with the `generate()` function.

Make sure to update your transformers installation via `pip install --upgrade transformers`.

```python
import torch
from transformers import pipeline

model_id = "prithivMLmods/Llama-Thinker-3B-Preview"

# Load the model with bfloat16 weights and automatic device placement
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Conversation in the standard chat format (list of role/content dicts)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# The last message in the returned conversation is the model's reply
print(outputs[0]["generated_text"][-1])
```
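
Alternatively, if you prefer the `Auto` classes over the pipeline abstraction, a minimal sketch (assuming the model ships a chat template, as Llama 3.2 derivatives typically do) looks like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/Llama-Thinker-3B-Preview"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Apply the model's chat template and move the inputs to the model's device
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```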

Note: You can also find detailed recipes on how to use the model locally, with `torch.compile()`, assisted generation, quantization, and more at [huggingface-llama-recipes](https://github.com/huggingface/huggingface-llama-recipes).
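
As one illustration of those recipes, here is a minimal 4-bit loading sketch using `bitsandbytes` (the specific `BitsAndBytesConfig` values below are assumptions, not settings published for this model; the linked recipes cover this and other variants in detail):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "prithivMLmods/Llama-Thinker-3B-Preview"

# Assumed 4-bit NF4 quantization config (requires the `bitsandbytes` package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```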

## Use with llama

Please follow the instructions in the repository.

To download the original checkpoints, see the example command below leveraging `huggingface-cli`:

```bash
huggingface-cli download prithivMLmods/Llama-Thinker-3B-Preview --include "original/*" --local-dir Llama-Thinker-3B-Preview
```
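
As a follow-up sketch (assuming you download the full repository instead, i.e. omit `--include "original/*"` so the `transformers`-format weights are included), the resulting local directory can be passed to `transformers` in place of the Hub ID:

```python
import torch
from transformers import pipeline

# Hypothetical local load: assumes the full repo was downloaded to this path
pipe = pipeline(
    "text-generation",
    model="./Llama-Thinker-3B-Preview",  # local path instead of Hub ID
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```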